Browsing by Subject "Models"
Now showing 1 - 18 of 18
Item: The academician-practitioner gap : the past, present and future (2011-12). Lai, Jocelyn Shiuan; Wilcox, Gary B.; Burns, Neal M.
The academician-practitioner gap has long been discussed within the advertising community. Extensive literature has been published, numerous organizations formed, and changes made in both the academic and practitioner worlds, all to address and help close the gap. This report is an analysis of the academician-practitioner gap: what it entails, why it exists, and what must be in place to begin properly reducing it.

Item: Analyses of infectious disease data with attention to heterogeneity (2013-08). O'Dea, Eamon Brendin; Wilke, C. (Claus); Meyers, Lauren Ancel
This work comprises three projects that extend previous models to include features of practical significance for the statistical analysis of infectious disease data. In the first, we determine through a simulation study how the degree of heterogeneity in the number of contacts that individuals have affects the relationship between estimates of a pathogen's effective population size based on coalescent theory and the true prevalence and incidence of that pathogen. In the second, we find that aggregating data from many small outbreaks allows the parameters of stochastic epidemic models to be consistently estimated with a generalized linear model. Application of this method to a set of 77 small norovirus outbreaks reveals interesting differences in the transmission parameters between hospital and nursing-home outbreaks. In the third project, we gain insight into HIV contact networks in the United States by fitting data from a number of surveys to a simple stochastic model of a dynamic network.

Item: Applications of Hamiltonian theory to plasma models (2016-05). Keramidas Charidakos, Ioannis; Morrison, Philip J.; Waelbroeck, F.; Horton, Wendell C.; Hazeltine, Richard; Fitzpatrick, Richard; Gamba, Irene M.
Three applications of Hamiltonian methods in plasma physics are presented.
The first application is the development of a new, five-field, Hamiltonian gyrofluid model. It comprises evolution equations for the ion density, pressure, and parallel temperature, and for the electron density and pressure, and it contains curvature and compressibility effects. The model is shown to conserve energy, and a Lie-Poisson bracket for it is given. Casimir invariants are calculated, and through them the normal fields of the system are recovered. The model is then linearized and shown to possess modes identified with the slab ITG, toroidal ITG, and KBM modes. Both electrostatic and electromagnetic studies are performed. Growth rates and critical parameters for instability are computed and compared with their fluid and kinetic counterparts; the accuracy of the model lies between the fluid and kinetic results, as expected. Dissipation is added to the ideal system via non-local terms that mimic Landau damping. The modes of the system are shown to undergo Krein bifurcations, and their behavior once dissipation is turned on strongly suggests that they are negative-energy modes. A connection between the marginal stability condition of the ITG mode at high k┴ and the (missing) equation for the perpendicular pressure is conjectured, opening an interesting possibility for future research. The second application is a method for deriving reduced fluid models through the use of an action principle. The importance of the method lies in the fact that, since all approximations are made directly at the level of the action, the models that result from the action minimization are guaranteed to retain the Hamiltonian character of their parent model. The two-fluid action is given in Lagrangian variables, and the two-fluid equations of motion are recovered by its minimization. The Eulerian (field) equations of motion are retrieved through the Lagrange-to-Euler (L-E) map.
New single-fluid variables are defined, but instead of being implemented at the level of the equations of motion, they are implemented directly in the action. The action is then subjected to approximations; different approximations lead to different models, recovering the models of Lüst, extended MHD, Hall MHD, and electron MHD. Passing from Lagrangian to Eulerian variables in the single-fluid description requires a non-trivial modification of the L-E map. A note about the importance of quasineutrality in single-fluid models and its ramifications in the Lagrangian framework is given, and several invariants of the models are calculated via Noether's theorem. The third application concerns the imposition of constraints in Hamiltonian systems. Two worked examples of the method of Dirac are presented. The first is an electrostatic model that has the Hasegawa-Mima equation and RMHD as distinct limits; the constraint that leads to the Hasegawa-Mima limit is investigated, the calculations are demonstrated in detail, and the reduced system is produced. A brief discussion of the dispersion relation of the reduced system concludes the first example. The second example is the imposition of quasineutrality and a divergence-free current on the bracket of the two-fluid model. The various steps of the method are displayed, and the example is completed with the verification that the new bracket satisfies the constraints. The possibility of performing the same calculation with single-fluid variables remains open for future research.

Item: Between 3-D Computer Models and 3-D Physical Models: People's Understanding and Preference (2014-12-16). Jiang, Yin
Good communication between architects and clients is an important factor in a successful architectural project. It is critical for architects to present their design ideas effectively and unambiguously to reduce or eliminate their clients' misunderstanding.
For people who are not professionally trained in architecture, a three-dimensional (3-D) model is one of the most effective media of communication. The purpose of this study is to compare laypeople's understanding and preference of digital and physical models, how these models are used in design practice, and how architects evaluate their clients' understanding and preference. The study consisted of a quantitative phase and a qualitative phase. The quantitative phase compared desktop-based interactive 3-D architectural models with physical models by investigating laypeople's understanding of spatial layout and their preferences regarding the two model types. An office complex and a single-family residence were designed, and each was represented in both physical and digital form with the same level of detail. Participants were asked to memorize the building components and reassemble them based on their memory. The qualitative phase involved a series of semi-structured interviews with eight experienced design professionals; its aim was to collect their opinions about how they perceive their clients' preferences and understanding of the two types of models in practice. The data from both phases were analyzed. In general, results from the quantitative phase reveal that laypeople who studied physical models performed their tasks significantly better than those who studied digital models. The qualitative phase discusses architects' choice of models, the factors that drive their decisions, communication with clients, and clients' understanding of those models.

Item: Computational studies of electron transport and reaction rate models for argon plasma (2010-08). Min, Timothy T.; Raja, Laxminarayan L.; Hallock, Gary
A validation study was performed on a capacitively coupled argon discharge to determine the most suitable models for chemistry and electron transport.
Choices for the chemical reaction rate and electron transport models include equilibrium and non-equilibrium electron energy distribution functions (EDFs). Experimental studies were performed by our collaborative partners at the Colorado School of Mines. Conditions for the studies were pressures of 138, 315, and 618 mTorr, with the cycle-averaged power varied over 20, 50, and 80 W and the voltage supply driven at 13.56 MHz. Simulations were performed using the pressures and voltages used in the experiments. The most accurate case was 138 mTorr at 50 W using a non-Maxwellian-EDF-based chemistry (called Bolsig+ chemistry) and a constant electron momentum-transfer cross section of 20 Angstroms computed from Boeuf's paper; this model reproduced power deposition to within 2.6%. Furthermore, species number densities, electron temperature, and sheath thicknesses were obtained. Using Bolsig+ chemistry resulted in electron temperatures 20,000 K higher than using Arrhenius chemistry rates. Results indicate that power deposition occurs through electrons gaining energy from the sheath, which in turn bombard neutral species, producing metastable argon.

Item: Computer modeling of the instructionally insensitive nature of the Texas Assessment of Knowledge and Skills (TAKS) exam (2009-08). Pham, Vinh Huy, 1979-; Stroup, Walter M.
Stakeholders of the educational system assume that standardized tests are transparently about the subject content being tested and can therefore be used as a metric to measure achievement in outcome-based educational reform. Both analysis of longitudinal data for the Texas Assessment of Knowledge and Skills (TAKS) exam and agent-based computer modeling of its underlying theoretical testing framework have yielded results indicating that the exam only rank-orders students on a persistent but uncharacterized latent trait, both across the domains tested and across years. Such persistent rank ordering of students is indicative of an instructionally insensitive exam.
This is problematic in the current atmosphere of high-stakes testing, which holds teachers, administrators, and school systems accountable for student achievement.

Item: The construction and use of physics-based plasticity models and forming-limit diagrams to predict elevated temperature forming of three magnesium alloy sheet materials (2013-08). Antoniswamy, Aravindha Raja; Taleff, Eric M.
Magnesium (Mg) alloy sheets possess several key properties that make them attractive as lightweight replacements for heavier ferrous and non-ferrous alloy sheets. However, Mg alloys must be formed at elevated temperatures to overcome their limited room-temperature formability. For example, commercial forming is presently conducted at 450°C. The deformation behavior of the most commonly used wrought Mg alloy, AZ31B-H24, and of two potentially competitive materials with weaker crystallographic textures, AZ31B-HR and ZEK100 alloy sheets, is studied in uniaxial tension at 450°C and lower temperatures. The underlying physics of deformation, including the operating deformation mechanisms, grain growth, normal and planar anisotropy, and strain hardening, is used to construct material constitutive models capable of predicting forming for all three Mg alloy sheets at 450°C and 350°C. The material models constructed are implemented in finite-element-method (FEM) simulations and validated using biaxial bulge forming, an independent testing method. Forming-limit diagrams are presented for the AZ31B-H24 and ZEK100 alloy sheets at temperatures from 450°C down to 250°C.
The results suggest that forming processes at temperatures lower than 450°C are potentially viable for manufacturing complex Mg components.

Item: Design and comparison of single crystal and ceramic Tonpilz transducers (2010-08). Nguyen, Kenneth Khai; Haberman, Michael R.; Wilson, Preston S.; Hall, Neal A.
Transducers utilizing single-crystal piezoelectrics as the active elements have been shown to exhibit broader operating bands, higher response levels, and higher power efficiency than transducers using piezoceramics, while also reducing the size and mass of the transducer (Moffett et al., J. Acoust. Soc. Am., 2007). The key to these high-performance characteristics is the piezocrystal's inherently high electromechanical coupling coefficient. One potential application is to replace multiple narrowband piezoceramic transducers with a single broadband piezocrystal transducer, reducing the system's weight and size; this is very important for the new generation of smaller, power-efficient unmanned underwater vehicles (UUVs). Another application is in very broadband communication networks. The work presented here focuses specifically on the design, modeling, and construction of Tonpilz transducers using piezoelectrics as the active material. The modeling includes lumped-element and finite-element analysis to approximate the performance of these transducers; these models serve as the main structure of an overall iterative design process. The objective of this research is to compare the performance characteristics of a piezocrystal and a piezoceramic Tonpilz transducer and to validate the models by comparing the model predictions with experimental results.

Item: Development and assessment of models for predicting the phytoplankton assemblage patterns in Lake Kemp (Texas Tech University, 2005-05). Shuck, Jesse Paulson; Wilde, Gene R.; Strauss, Richard E.; Pope, Kevin L.
Phytoplankton are an essential component of aquatic systems.
Despite their microscopic size, these organisms are responsible for the majority of primary production that takes place in lakes and reservoirs. The development of predictive models has been successful in predicting certain aspects of phytoplankton communities, but none have been able to predict the simple assemblage patterns found in these communities.

Item: Development of a suction detection system for a motorized pulsatile blood pump (2010-08). Adnadjevic, Djordje; Longoria, Raul G.; Djurdjanovic, Dragan
A computational model has been developed to study the effects of left ventricular assist devices (LVADs) on the cardiovascular system during a ventricular collapse. The model consists of a toroidal pulsatile blood pump and a closed-loop circulatory system. Together, they predict the pump's motor current traces that reflect ventricular suck-down and provide insight into the torque magnitudes that the pump experiences. In addition, the model investigates the likelihood of a suction event and predicts reasonable outcomes for several test cases. Ventricular collapse was modeled with the help of a mock circulatory loop consisting of an artificial left ventricle and a centrifugal continuous-flow pump. This study also investigates different suction detection schemes and proposes the most suitable suction detection algorithm for the TORVAD pump, a toroidal left ventricular assist device.
Model predictions were further compared against data sampled during in vivo animal trials with the TORVAD system; the two sets of results are in good agreement.

Item: Fluid description of relativistic, magnetized plasmas with anisotropy and heat flow : model construction and applications (2009-08). TenBarge, Jason Michael; Hazeltine, R. D. (Richard D.)
Many astrophysical plasmas and some laboratory plasmas are relativistic: either the thermal speed or the local bulk flow in some frame approaches the speed of light. Often, such plasmas are magnetized in the sense that the Larmor radius is smaller than any gradient scale length of interest. Conventionally, relativistic MHD is employed to treat relativistic, magnetized plasmas; however, MHD requires the collision time to be shorter than any other time scale in the system, and thus employs the thermodynamic-equilibrium form of the stress tensor, neglecting pressure anisotropy and heat flow parallel to the magnetic field. We re-examine the closure question and find a more complete theory, which yields a more physical and self-consistent closure. Beginning with exact moments of the kinetic equation, we derive a closed set of Lorentz-covariant fluid equations for a magnetized plasma, allowing for pressure and heat-flow anisotropy. Basic predictions of the model, including its thermodynamics and the dispersion relation's dependence upon relativistic temperature, are examined. Further, the model is applied to two extant astrophysical problems.

Item: A mixed-integer model for optimal grid-scale energy storage allocation (2010-08). Harris, Chioke Bem; Meyers, Jeremy P.; Webber, Michael E., 1971-
To meet ambitious upcoming state renewable portfolio standards (RPSs), respond to customer demand for "green" electricity choices, and move toward more renewable, domestic, and clean sources of energy, many utilities and power producers are accelerating deployment of wind, solar photovoltaic, and solar thermal generating facilities.
These sources of electricity, particularly wind power, are highly variable and difficult to forecast. To manage this variability, utilities can increase the availability of fossil-fuel backup generation, but this approach eliminates some of the emissions benefits associated with renewable energy. Alternatively, energy storage could provide the needed ancillary services for renewables. Energy storage could also support other operational needs of utilities, providing greater system resiliency, zero-emission ancillary services for other generators, faster responses than current backup generation, and lower marginal costs than some fossil-fueled alternatives. These benefits might justify the high capital cost associated with energy storage. Quantitative analysis of the role energy storage can play in improving economic dispatch, however, is limited. To examine the potential benefits of energy storage availability, a generalized unit commitment model of thermal generating units and energy storage facilities is developed. The initial study focuses on the city of Austin, Texas. While Austin Energy's proximity to and collaborative partnerships with The University of Texas at Austin facilitated collaboration, its ambitious goal to produce 30-35% of its power from renewable sources by 2020, as well as its continued leadership in smart-grid technology implementation, makes it an excellent initial test case. The model developed here is sufficiently flexible that it can be used to study other utilities or coherent regions. Results from the energy storage deployment scenarios studied here show that if all costs are ignored, large quantities of seasonal storage are preferred, enabling plentiful wind generation stored during winter months to be dispatched during high-cost peak periods in the summer. Such an arrangement can yield as much as $94 million in yearly operational cost savings, but might cost hundreds of billions of dollars to implement.
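The operational savings that storage captures in such scenarios come largely from shifting energy from low-price to high-price hours. As a deliberately simplified, hypothetical sketch (this is not the study's unit commitment model; the prices, capacity, and round-trip efficiency below are invented for illustration), the arbitrage value of a small device might be estimated by pairing cheap charge hours with expensive discharge hours:

```python
# Toy storage-arbitrage estimate: charge 1 MWh in each of the cheapest
# hours and discharge 1 MWh in each of the priciest hours, up to the
# device's energy capacity, charging round-trip losses against the
# discharge. Temporal state-of-charge feasibility is ignored.

def arbitrage_value(prices, capacity_mwh, efficiency=0.8):
    hours = sorted(range(len(prices)), key=lambda h: prices[h])
    savings, cycled = 0.0, 0
    lo, hi = 0, len(hours) - 1
    while cycled < capacity_mwh and lo < hi:
        buy, sell = prices[hours[lo]], prices[hours[hi]]
        margin = efficiency * sell - buy  # profit per MWh cycled
        if margin <= 0:  # no further profitable cycles
            break
        savings += margin
        cycled += 1
        lo += 1
        hi -= 1
    return savings

# Invented hourly prices in $/MWh for a 12-hour window.
prices = [22, 25, 30, 28, 35, 60, 120, 95, 40, 33, 27, 24]
value = arbitrage_value(prices, capacity_mwh=3)
```

A real model adds charge/discharge power limits, state-of-charge dynamics, and co-optimization with generator commitment, which is what makes a mixed-integer formulation necessary.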
Conversely, yearly cost reductions of $40 million can be achieved with one compressed-air energy storage (CAES) facility and a small fleet of electrochemical storage devices. These results indicate that small quantities of storage could have significant operational benefit, as they manage only the highest-cost hours of the year, avoiding the most expensive generators while improving utilization of renewable generation throughout the year. Further study using a modified unit commitment model can help narrow the performance requirements of storage, clarify optimal storage portfolios, and determine the optimal siting of this storage within the grid.

Item: Model eliciting activities : an assessment framework in a middle school science context (2011-08). Tasneem, Tania; Walker, Mary H.; Marshall, Jill Ann
This work stems from the fact that objectively assessing student "mastery" of science concepts, without truly understanding how students are making sense of those concepts, continues to be one of the most difficult tasks I face as an educator. A model eliciting activity (MEA) is an instructional tool that provides students and teachers with ample opportunities to express, test, and refine their thinking while simultaneously providing a document trail of that thinking. Model eliciting activities allow teachers, students, and researchers to gain valuable information about how students construct, test, and revise models. Essentially, they are rich metacognitive tools that encourage students to express and refine their own thinking while giving teachers, and students themselves, insight into how students are learning. However, two difficulties arise in the implementation of MEAs: (1) assessing the quality of the tasks involved in MEAs, and (2) assessing student knowledge demonstrated through MEAs (Wang et al., 2009).
This report reviews the literature on assessing MEAs and focuses on the development of a generalized assessment framework for model eliciting activities in a middle school science context.

Item: Modeling of multiphase behavior for gas flooding simulation (2009-08). Okuno, Ryosuke, 1974-; Johns, Russell T.; Sepehrnoori, Kamy, 1951-
Miscible gas flooding is a common method for enhanced oil recovery. Reliable design of miscible gas flooding requires compositional reservoir simulation that can accurately predict the fluid properties resulting from mass transfer between reservoir oil and injection gas. Drawbacks of compositional simulation are the efficiency and robustness of the phase equilibrium calculations, which consist of flash calculations and phase stability analysis. Simulation of multicontact miscible gas flooding involves a large number of phase equilibrium calculations in a near-critical region, where the calculations are time-consuming and difficult. Also, mixtures of reservoir oil and solvents such as CO₂ and rich gas can exhibit complex phase behavior at temperatures typically below 120°F, where three hydrocarbon phases can coexist. However, most compositional simulators do not attempt to solve for three hydrocarbon phases, because three-phase equilibrium calculations are more complicated, difficult, and time-consuming than traditional two-phase equilibrium calculations. Due to the lack of robust algorithms for three-phase equilibrium calculations, the effect of a third hydrocarbon phase on low-temperature oil displacement is little known. We develop robust and efficient algorithms for two- and three-phase equilibrium calculations. The algorithms are implemented in a compositional reservoir simulator, and simulation case studies show that they can significantly decrease computational time without loss of accuracy; a speed-up of 40% is achieved for a reservoir simulation using 20 components, compared with standard algorithms.
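The two-phase flash at the core of such equilibrium calculations can be illustrated with the classical Rachford-Rice equation. The sketch below is generic textbook material, not the algorithms developed in this work, and the feed composition and K-values are made up for illustration:

```python
# Classical Rachford-Rice two-phase flash (textbook sketch): find the
# vapor fraction beta in (0, 1) that solves
#   f(beta) = sum_i z_i * (K_i - 1) / (1 + beta * (K_i - 1)) = 0,
# then recover the liquid (x) and vapor (y) compositions.

def rachford_rice(z, K, tol=1e-12, max_iter=200):
    def f(beta):
        return sum(zi * (Ki - 1.0) / (1.0 + beta * (Ki - 1.0))
                   for zi, Ki in zip(z, K))
    # Bisection on the open interval (0, 1); assumes f(0) > 0 > f(1),
    # i.e. the feed actually splits into two phases.
    lo, hi = 1e-12, 1.0 - 1e-12
    for _ in range(max_iter):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
        if hi - lo < tol:
            break
    beta = 0.5 * (lo + hi)
    x = [zi / (1.0 + beta * (Ki - 1.0)) for zi, Ki in zip(z, K)]  # liquid
    y = [Ki * xi for Ki, xi in zip(K, x)]                         # vapor
    return beta, x, y

# Made-up feed composition and K-values for illustration only.
beta, x, y = rachford_rice([0.3, 0.4, 0.3], [4.0, 1.2, 0.3])
```

Three-phase calculations generalize this to two independent phase fractions solved simultaneously, which is where the robustness issues discussed in the abstract arise.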
Speed-up occurs not only because of improved computational efficiency but also because of increased robustness, which permits longer time-step sizes. We demonstrate the importance of three-phase equilibrium calculations, showing that simulations with the two-phase equilibrium approximations proposed in the literature can fail completely or produce erroneous results. Using the robust phase equilibrium algorithms developed, we investigate the mechanism behind the high efficiency of low-temperature oil displacements by CO₂ involving three hydrocarbon phases. Results show that high displacement efficiency can be achieved when the composition path passes near the critical endpoint where the gaseous and CO₂-rich liquid phases merge in the presence of the oleic phase. Complete miscibility may not develop for three-phase flow without considering the existence of a tricritical point.

Item: Probabilistic bicriteria models : sampling methodologies and solution strategies (2010-08). Rengarajan, Tara; Morton, David P.; Hasenbein, John J.; Kutanoglu, Erhan; Muthuraman, Kumar; Popova, Elmira
Many complex systems involve simultaneous optimization of two or more criteria, with uncertainty in system parameters being a key driver in decision making. In this thesis, we consider probabilistic bicriteria models in which we seek to operate a system reliably while keeping operating costs low. High reliability translates into low risk of uncertain events that can adversely impact the system. In bicriteria decision making, a good solution must, at the very least, have the property that the criteria cannot both be improved relative to it. The problem of identifying a broad spectrum of such solutions can be highly involved, with no analytical or robust numerical techniques readily available, particularly when the system involves nontrivial stochastics. This thesis serves as a step toward addressing this issue.
We show how to construct approximate solutions, using Monte Carlo sampling, that are sufficiently close to optimal, easily calculable, and subject to a low margin of error. Our approximations can be used in bicriteria decision making across several domains that involve significant risk, such as finance, logistics, and revenue management. As a first approach, we place a premium on a low risk threshold and examine the effects of a sampling technique that guarantees a prespecified upper bound on risk. Our model incorporates a novel construct in the form of an uncertain disrupting event whose time and magnitude of occurrence are both random. We show that stratifying the sample observations in an optimal way can yield savings of a high order. We also demonstrate the existence of generalized stratification techniques that enjoy this property and that can be used without full distributional knowledge of the parameters governing the time of disruption. Our work thus provides a computationally tractable approach for solving a wide range of bicriteria models via sampling with a probabilistic guarantee on risk. Improved proximity to the efficient frontier is illustrated in the context of a perishable inventory problem. In contrast to this approach, we next aim to solve a bicriteria facility sizing model, in which risk is the probability that the system fails to jointly satisfy a vector-valued random demand. Here, instead of seeking a probabilistic guarantee on risk, we seek to approximate well the efficient frontier for a range of risk levels of interest. Replacing the risk measure with an empirical measure induced by a random sample, we proceed to solve a family of parametric chance-constrained and cost-constrained models. These two sampling-based approximations differ substantially in terms of what is known regarding their asymptotic behavior, their computational tractability, and even their feasibility as compared to the underlying "true" family of models.
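The empirical-measure replacement just described can be sketched generically: draw a Monte Carlo sample, estimate the risk by the fraction of violating scenarios, and size the system against that estimate. The demand model, sample size, and risk level below are assumptions for illustration, not data or code from the thesis:

```python
import random

# Sample-average approximation of a chance constraint (generic sketch):
# replace the true risk P(demand > capacity) with its empirical estimate
# from n Monte Carlo draws, then pick the smallest capacity whose
# empirical risk does not exceed the target level alpha.

def saa_capacity(sample, alpha):
    """Smallest capacity with empirical exceedance probability <= alpha."""
    ordered = sorted(sample)
    n = len(ordered)
    k = int(alpha * n)  # number of sample points allowed to exceed capacity
    return ordered[n - k - 1]

random.seed(7)
demand = [random.gauss(100.0, 15.0) for _ in range(10_000)]
cap = saa_capacity(demand, alpha=0.05)
emp_risk = sum(d > cap for d in demand) / len(demand)
```

The approximating problem is then a deterministic one posed over the sample; how well, and with what guarantees, such approximations track the true efficient frontier is the kind of question the thesis addresses.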
We establish, however, that in the bicriteria setting we have the freedom to employ either the chance-constrained or the cost-constrained family of models, improving our ability both to characterize the quality of the efficient frontiers arising from these sampling-based approximations and to solve the approximating model itself. Our computational results reinforce the need for such flexibility and enable us to understand the behavior of confidence bounds for the efficient frontier. As a final step, we further study the efficient frontier in the cost-versus-risk tradeoff for the facility sizing model in the special case in which the (cumulative) distribution function of the underlying demand vector is concave in a region defined by a highly reliable system. In this case, the "true" efficient frontier is convex. We show that the convex hull of the efficient frontier of a sampling-based approximation: (i) can be computed in strongly polynomial time by relying on a reformulation as a max-flow problem via the well-studied selection problem; and (ii) converges uniformly to the true efficient frontier when the latter is convex. We conclude with numerical studies that demonstrate the aforementioned properties.

Item: Sediment transport dynamics in the lower Mississippi River : non-uniform flow and its effects on river-channel morphology (2010-12). Nittrouer, Jeffrey Albert; Mohrig, David
This dissertation examines the dynamics of sediment transport and channel morphology in the lower Mississippi River. The area of research includes the portion of the river where reach-averaged downstream flow velocity responds to the boundary condition imposed by the relatively uniform water-surface elevation of the receiving basin. Observational studies provided data used to identify channel-bed sediment composition and to measure bed-material sediment flux and the properties of the fluid-flow field over a variety of water-discharge conditions.
The analyses demonstrate that a significant portion of the channel bed of the final 165 kilometers of the Mississippi River consists of exposed and eroding relict sedimentary strata that qualify as surrogate bedrock. The exposed bedrock is confined to the channel thalweg, particularly in river-bend segments, while actively mobile bed-material sediments are positioned on subaqueous bars fixed by the river planform. The analyses of sediment flux provide insight into the nature of sediment transport: during low- and moderate-water discharge, bed-material movement occurs primarily as minimal bedform flux, so bed materials are not transferred between alluvial bars. During high-water discharge, bed-material transport increases one-hundred-fold, and sands move in both suspended and bedform transport. Physical models are used to show that skin-friction shear stress increases by a factor of ten over the measured water-discharge range. This change is not possible under uniform water flow, and therefore non-uniform flow, in response to the Mississippi River approaching its outlet, has a significant impact on the timing and magnitude of sediment flux through the lower river. To estimate the dynamics of bed-material movement from the uniform to the non-uniform segment of the river (the lower 800 km), data on channel morphology are used to construct a model that predicts spatial changes in water-flow velocity and bed-material flux over a range of water-discharge conditions. The model demonstrates that non-uniform flow tends to produce a region of net channel-bed aggradation between 200 and 700 kilometers above the outlet, and a region of channel-bed degradation over the final 200 kilometers.
The implications of these results for the spatial variability of channel morphology and kinematics are explored.

Item: Training experience satisfaction prediction based on trainees' general information (2010-08). Huang, Hsiu-Min Chang, 1958-; Ghosh, Joydeep; Graser, Thomas
Training is a powerful and necessary method for equipping human resources with the tools to keep their organizations competitive in the market. Typically, at the end of a class, trainees are asked to report their feelings about, or satisfaction with, the training. Although there are various reasons for conducting training evaluations, the common theme is the need to continuously improve a training program. Among training evaluation methods, post-training surveys or questionnaires are the most commonly used way to obtain trainees' reactions to the training program, and "the forms will tell you to what extent you've been successful" (Kirkpatrick, 2006). A higher satisfaction score means more trainees were satisfied with the training. A total of 40 prediction models, grouped into 10-GIQ and 6-GIQ prediction models, were built in this work to predict total training satisfaction based on trainees' general information, which included a trainee's desire to take the training, the trainee's attitude in class, and other information related to the trainee's work environment and other characteristics. The best models selected from the 10-GIQ and 6-GIQ prediction models achieved prediction quality of PRED(0.15) >= 99% and PRED(0.15) >= 98%, respectively. An interesting observation in this work is that training satisfaction could be predicted from trainee information not related to any training experience at all. The dominant factors in training satisfaction were the trainee's attitude in class and the trainee's desire to take the training, found in the 10-GIQ and 6-GIQ prediction models, respectively.