Browsing by Subject "simulation"
Now showing 1 - 20 of 40
Item: A comparison of selection and breeding strategies for incorporating wood properties into a loblolly pine (Pinus taeda L.) elite population breeding program (Texas A&M University, 2004-09-30) Myszewski, Jennifer Helen

The heritability of microfibril angle (MFA) in loblolly pine, Pinus taeda L., and its genetic relationships with height, diameter, volume and specific gravity were examined in two progeny tests with known pedigrees. Significant general combining ability (GCA), specific combining ability (SCA), and SCA x block effects indicated that there are both additive and non-additive genetic influences on MFA. Individual-tree narrow-sense heritability estimates were variable, ranging from 0.17 for earlywood (ring) 4 MFA to 0.51 for earlywood (ring) 20 MFA. Genetic correlations between MFA, specific gravity and the growth traits were non-significant due to large estimated standard errors. Multiple-trait selection and breeding in a mainline and elite population tree improvement program were simulated using Excel and Simetar (Richardson 2001). The effects of four selection indices were examined in the mainline population and the effects of seven selection indices and four breeding strategies were examined in the elite population. In the mainline population, selection for increased growth caused decreased wood quality over time. However, it was possible to maintain the overall population mean MFA and mean specific gravity at levels present in the base population by implementing restricted selection indices. Likewise, selection for improved wood quality in the elite population resulted in decreased growth unless restricted selection indices or pulp indices derived from those of Lowe et al. (1999) were used. Correlated phenotypic responses to selection on indices using economic weights and heritabilities were dependent on breeding strategy.
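The direct and correlated responses to index selection summarized above follow the standard quantitative-genetics relations (the breeder's equation and the correlated-response formula). A minimal sketch; every numeric value below is hypothetical, not an estimate from this study:

```python
import math

def direct_response(i, h2, sigma_p):
    """Breeder's equation: response to selection R = i * h^2 * sigma_P,
    where i is the selection intensity."""
    return i * h2 * sigma_p

def correlated_response(i, h2_x, h2_y, r_g, sigma_p_y):
    """Correlated response in trait Y to selection on trait X:
    CR_Y = i * h_X * h_Y * r_A * sigma_P(Y)."""
    return i * math.sqrt(h2_x) * math.sqrt(h2_y) * r_g * sigma_p_y

# Hypothetical values: selecting on volume (X) while tracking the
# correlated drift in a wood-quality trait such as MFA (Y)
r_volume = direct_response(i=1.4, h2=0.25, sigma_p=4.0)
cr_mfa = correlated_response(i=1.4, h2_x=0.25, h2_y=0.3, r_g=0.2, sigma_p_y=4.0)
```

A weak genetic correlation (r_g near zero) keeps the correlated drift small, which is why the large standard errors on the estimated correlations above matter when predicting responses.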
When a circular mating system (with parents randomly assigned to controlled crosses) was used, the index trait with the higher economic weight was more influential in determining correlated responses in non-index traits than the index trait with the lower economic weight. However, when positive assortative mating was used, the index trait with the greater variance was more influential in determining correlated responses in non-index traits than the index trait with the lower variance, regardless of economic weight.

Item: A Monte Carlo investigation of robustness to nonnormal incomplete data of multilevel modeling (Texas A&M University, 2006-10-30) Zhang, Duan

Due to its increasing popularity, hierarchical linear modeling (HLM) has been used along with structural equation modeling (SEM) to analyze data with nested structure. In spite of the extensive research on commonly encountered problems such as violation of normality and missing data treatment within the framework of SEM, these areas have been much less explored in HLM. The present study compared HLM and multilevel SEM through a Monte Carlo study from the perspectives of the influence of nonnormality and the performance of multiple imputation based on the expectation-maximization (EM) algorithm under various combinations of sample sizes at two levels. The statistical power, parameter estimates, standard errors, and estimation bias for the main effects and cross-level interaction in a two-level model were compared across four design factors: analysis method, normality condition, missing data proportion, and sample size. HLM and multilevel SEM appeared to have similar power in detecting the main effect, while HLM had better power for the cross-level interaction. Neither seemed to be sensitive to violation of the normality assumption. A higher proportion of missing data resulted in larger standard errors and estimation bias.
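The link reported above between missing-data proportion and standard errors can be illustrated with a toy Monte Carlo sketch (a generic one-level illustration, not the study's multilevel design; the sample size, replication count, and 30% missingness rate are arbitrary):

```python
import random
import statistics

def mc_se_of_mean(n=100, missing=0.0, reps=2000, seed=1):
    """Empirical standard error of the sample mean when a fraction of
    observations is deleted completely at random (listwise deletion)."""
    rng = random.Random(seed)
    means = []
    for _ in range(reps):
        data = [rng.gauss(0.0, 1.0) for _ in range(n)]
        kept = [x for x in data if rng.random() >= missing]
        means.append(statistics.fmean(kept))
    return statistics.stdev(means)

se_full = mc_se_of_mean(missing=0.0)   # theoretical SE = 1/sqrt(100) = 0.1
se_miss = mc_se_of_mean(missing=0.3)   # ~30% of cases lost per replication
# Deleting cases shrinks the effective sample and inflates the empirical SE
```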
Sample sizes at both the individual and cluster levels played a role in the statistical power for parameter estimates. The two-way interactions among the four factors were generally nonzero. Overall, both HLM and multilevel SEM were quite robust to violation of normality. SEM appears more useful in more complex path models, while HLM is superior in detecting main effects. Multiple imputation based on the EM algorithm performed well in producing stable parameter estimates for up to 30% missing data. Sample size design should take into account the level at which the research is most focused.

Item: An empirical simulation analysis on cotton marketing strategies in west Texas (2009-05-15) Elrod, Christopher Patrick

The three marketing strategies, buying a put option, cash sale at harvest, and cash sale in June after December harvest, are simulated for six representative irrigated and dryland cotton farms in West Texas. Each marketing strategy is ranked using the net cash income probability distribution for the representative farms using stochastic efficiency with respect to a function (SERF). SERF rankings were consistent across dryland and irrigated farms. Buying a put option was found to produce the highest certainty equivalent (CE) for normally risk-averse decision makers. The cash sale at harvest and cash sale in June strategies were ranked second and third, respectively. A sensitivity analysis increased the national baseline price used in the model by 45 percent. Cash sale at harvest then consistently became the highest ranked marketing strategy, followed by buying a put option and then cash sale in June.
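SERF ranks risky alternatives by their certainty equivalents over a range of absolute risk-aversion coefficients. A minimal sketch of the certainty-equivalent calculation under negative exponential (CARA) utility; the net-income draws below are hypothetical, not the study's simulated distributions:

```python
import math
import statistics

def certainty_equivalent(incomes, r):
    """Certainty equivalent under negative exponential (CARA) utility
    U(x) = -exp(-r * x); as r -> 0 the CE approaches the mean income."""
    if r == 0:
        return statistics.fmean(incomes)
    return -math.log(statistics.fmean(math.exp(-r * x) for x in incomes)) / r

# Hypothetical net-income draws ($/acre) for two strategies
put_option = [120.0, 150.0, 180.0, 200.0, 90.0]
cash_sale = [60.0, 140.0, 260.0, 230.0, 40.0]

# Comparing CEs across a grid of r values is the core of a SERF
# comparison: the more variable strategy is penalized more as r grows.
ce_put = certainty_equivalent(put_option, 0.05)
ce_cash = certainty_equivalent(cash_sale, 0.05)
```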
The research found that if a strike price and premium that covered the production costs of the representative farm were available during the pre-harvest period, the decision maker may have the ability to increase utility by hedging with the put option.

Item: Behavioral ecology and conservation of large mammals: historical distribution, reintroduction and the effects of fragmented habitat (2009-05-15) Gilad, Oranit

Conservation biologists have used reintroduction as a method to reestablish extirpated species in their native habitat. Three important aspects of a successful reintroduction effort include: (1) a habitat suitability study of the reintroduction area, including effects of migration corridors; (2) identification of possible predators of the reintroduced species; and (3) a post-reintroduction assessment including an evaluation of the species' population dynamics. In this study I examine the suitability of Guadalupe Mountains National Park (GUMO) as a reintroduction area for desert bighorn sheep. The study used landscape metrics to compare GUMO to a nearby mountain range that is currently supporting an estimated population of 400 bighorn sheep. This study identified migration corridors for bighorns throughout the region and evaluated mountain lion (a potential predator of bighorn sheep) numbers either residing in or passing through the park between 1997 and 2004. Results of the studies in GUMO revealed 15,884 ha of suitable habitat for bighorn sheep and provided evidence of migration routes between GUMO and neighboring mountain ranges. In terms of potential predators, a minimum of 32 resident and/or transient mountain lions occurred in GUMO over a seven-year period, and a minimum of 15 cats used the park in 2002. Based on estimates of the individual home ranges of males and females, GUMO should be able to support four to five individuals.
The genetic data indicate a high number of transients, or perhaps an unstable population of mountain lions, possibly the result of intense hunting pressure on cats in Texas. Finally, my study simulates parameters of the population dynamics of a different species, the Arabian oryx, which was reintroduced as three separate populations to the Israeli Negev between 1998 and 2005. I simulated population growth and the effect of migration corridors on species persistence. Results suggest that migration corridors are essential for a self-sustaining, viable metapopulation under current natality rates. In the event that natality rates increase (as was evident in a reintroduced population of Arabian oryx in Oman), the metapopulation can reach a viable size with only two of the release sites (open, flat terrain) connected by migration corridors.

Item: Bouquet: a Satellite Constellation Visualization Program for Walkers and Lattice Flower Constellations (2011-10-21) Enkh, Mandakh

The development of the Flower Constellation theory offers an expanded framework for utilizing constellations of satellites for tangible interests. To realize the full potential of this theory, the beta version of Bouquet was developed as a practical computer application that visualizes and edits Flower Constellations in a user-friendly manner. Programmed using C++ and OpenGL within the Qt software development environment for use on Windows systems, this initial version of Bouquet is capable of visualizing numerous user-defined satellites in both 3D and 2D and of plotting trajectories in arbitrary coordinate frames. The ultimate goal of Bouquet is to provide a viable open-source alternative to commercial satellite orbit analysis programs.
As such, the coding of Bouquet puts heavy emphasis on flexibility, upgradability, and methods to provide continued support through open-source collaboration.

Item: Daily Time Step Simulation with a Priority Order Based Surface Water Allocation Model (2011-02-22) Hoffpauir, Richard James

Surface water availability models often use monthly simulation time steps for reasons of data availability, model parameter parsimony, and reduced computational time. Representing realistic streamflow variability, however, requires modeling time steps with sub-monthly or daily temporal resolution. Adding daily time step simulation capability to the Water Rights Analysis Package (WRAP) and the Texas Water Availability Modeling (WAM) System is a growing area of need and interest in water rights permitting, water supply planning, and environmental protection. This research consisted of the following tasks:

1. Key modeling issues are identified that are relevant to daily time step modeling, but are otherwise not considered with monthly simulations. These key modeling issues include disaggregating monthly naturalized flows into daily flows, routing changes to flow through the stream network, reducing impacts to water availability in a priority order based water right system through the use of streamflow forecasting, distributing water right targets from monthly to daily amounts, and integrating flood control reservoir operations into the existing conservation reservoir modeling framework.
2. Two new programs for WRAP are developed to address the key daily time step modeling issues. The new programs include a pre-processor program, DAY, and a daily simulation program, SIMD.
3. A case study of the Brazos River Basin WAM is presented using daily time steps with SIMD. The purpose of the case study is to present an implementation of the daily modeling capabilities.
4. The case study simulation results are used as a basis to draw conclusions regarding monthly versus daily simulation outcomes.
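The first task above, disaggregating monthly naturalized flows into daily flows, can be sketched with a simple proportional scheme (a generic pattern-based approach for illustration; it is not necessarily the method implemented in DAY or SIMD, and the five-day pattern is hypothetical):

```python
def disaggregate_monthly_flow(monthly_volume, daily_pattern):
    """Distribute a monthly flow volume across days in proportion to a
    daily shape pattern (e.g. observed at a nearby gauged stream)."""
    total = sum(daily_pattern)
    if total == 0:                      # no usable pattern: spread uniformly
        n = len(daily_pattern)
        return [monthly_volume / n] * n
    return [monthly_volume * p / total for p in daily_pattern]

# Hypothetical 5-day pattern for brevity (a real month has 28-31 days)
pattern = [2.0, 8.0, 4.0, 1.0, 1.0]
daily = disaggregate_monthly_flow(3200.0, pattern)
# The daily flows sum back to the monthly volume and preserve the shape
```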
The research, as presented through the Brazos River Basin WAM case study, illustrated that incorporating realistic daily streamflow variability into the simulation of a priority order based water allocation system can substantially affect the results obtained for time series of critical period reservoir storage contents, the determination of long-term water right reliability, and the distribution of unappropriated and regulated flows. The modeling capabilities developed by this research advance the state of water availability modeling with sub-monthly time steps by addressing the key modeling issues related to streamflow variability and routing.

Item: Development of a Compositional Reservoir Simulator for Asphaltene Precipitation Based on a Thermodynamically Consistent Model (2013-07-23) Gonzalez Abad, Karin G.

A rigorous three-phase asphaltene precipitation model was implemented in a compositional reservoir simulator to represent and estimate the reduction of porosity and permeability responsible for productivity impairment. Previous modeling techniques were computationally inefficient, showed thermodynamic inconsistencies, or required special laboratory experiments to characterize the fluid. The approach developed in this study uses a cubic equation of state to solve for vapor/liquid/liquid equilibrium (VLLE), where asphaltene is the denser liquid phase. Precipitation from the liquid mixture occurs as its solubility is reduced either by changes in pressure (natural depletion) or composition (i.e., mixing resulting from gas injection). The dynamic relationship between phase composition, pressure, and porosity/permeability is modeled with a finite-difference reservoir simulator and solved using an implicit-pressure, explicit-saturations, explicit-compositions (IMPESC) direct sequential method.
The robustness of this model is validated by its ability to reproduce experimental asphaltene precipitation data while predicting the expected phase behavior envelope and response to key thermodynamic variables (i.e., type of components and composition, pressure, and temperature). The three-phase VLLE flash provides superior thermodynamic predictions compared to existing commercial techniques. Computer performance analysis showed that the model has a cost comparable to existing asphaltene precipitation models, taking only 1.1 times as much time to calculate while requiring fewer tunable parameters. The VLLE flash was on average 4.47 times slower than a conventional two-phase vapor/liquid flash. This model has the speed of a flash calculation while maintaining thermodynamic consistency, enabling efficient optimization of reservoir development strategies to mitigate the detrimental effects of asphaltene precipitation on productivity.

Item: Development of dynamic models of reactive distillation columns for simulation and determination of control (Texas A&M University, 2005-02-17) Chakrabarty, Arnab

Dynamic models of a reactive distillation column have been developed and implemented in this work. A model describing the steady-state behavior of the system was built as a first step. The results from this steady-state model were compared to data provided by an industrial collaborator, and the reconciled model formed the basis for the development of a dynamic model. Four controlled and four manipulated variables were determined in a subsequent step, and step tests for the manipulated variables were simulated. The data generated by the step responses were used to fit transfer functions between the manipulated and the controlled variables. RGA analysis was performed to find the optimal pairing for controller design. Feedback controllers of PID type were designed between the paired variables found from the RGA, and the controllers were implemented on the column model.
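The RGA pairing step described above is easy to sketch numerically: the relative gain array of a square steady-state gain matrix K is the element-wise product of K with the transpose of its inverse, and loops are paired on elements closest to 1. A 2x2 sketch with a hypothetical gain matrix (not the column model's actual gains):

```python
def rga_2x2(K):
    """Relative gain array of a 2x2 steady-state gain matrix:
    Lambda = K elementwise-times (inverse(K) transposed)."""
    (a, b), (c, d) = K
    det = a * d - b * c
    if det == 0:
        raise ValueError("singular gain matrix")
    # inv(K) = (1/det) * [[d, -b], [-c, a]]; element-wise with K gives:
    l11 = a * d / det
    l12 = -b * c / det
    return [[l11, l12], [l12, l11]]   # each row and column sums to 1

# Hypothetical gains between two manipulated and two controlled variables
K = [[2.0, 0.5],
     [0.4, 1.0]]
rga = rga_2x2(K)
# rga[0][0] close to 1 suggests the diagonal pairing u1-y1, u2-y2
```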
Both servo and regulatory problems have been considered and tested.

Item: Dynamic Control of Serial-batch Processing Systems (2010-01-14) Cerekci, Abdullah

This research explores how near-future information can be used to strategically control a batch processor in a serial-batch processor system setting. Specifically, improved control is attempted by using the upstream serial processor to provide near-future arrival information to the batch processor and to meet re-sequencing requests that shorten critical products' arrival times to the batch processor. The objective of the research is to reduce the mean cycle time and mean tardiness of the products processed by the serial-batch processor system. This research first examines how the mean cycle time performance of the batch processor can be improved by an upstream re-sequencing approach. A control strategy is developed by combining a look-ahead control approach with an upstream re-sequencing approach and is then compared with benchmark strategies through simulation. The experimental results indicate that the new control strategy effectively improves the mean cycle time performance of the serial-batch processor system, especially when the number of product types is large and batch processor traffic intensity is low or medium. These conditions are often observed in typical semiconductor manufacturing environments. Next, the use of near-future information and an upstream re-sequencing approach is investigated for improving the mean tardiness performance of the serial-batch processor system. Two control strategies are devised and compared with the benchmark strategies through simulation. The experimental results show that the proposed control strategies improve the mean tardiness performance of the serial-batch processor system.
Finally, the look-ahead control approaches that focus on the mean cycle time and mean tardiness performances of the serial-batch processor system are embedded in a new control strategy that addresses both performance measures simultaneously. It is demonstrated that look-ahead batching can be effectively used as a tool for controlling batch processors when multiple performance measures exist.

Item: Economic Analysis of Atoxigenic Mitigation Methods for Aflatoxin in Corn in Central Texas (2014-04-09) Sampson, Jessica Sue

Atoxigenics and crop insurance are available to producers to assist in preventing economic loss from aflatoxin contamination in corn. Atoxigenics are a newer technology available to farmers, and although professional opinion of this biotechnology encourages its use, an economic analysis has not been performed to determine whether atoxigenics are, overall, economically beneficial to the producer when combined with crop insurance. The objective of this paper is to perform an economic analysis of the decision to use available atoxigenic treatments on a corn crop and to evaluate the economic outcome at different crop insurance levels for corn producers in Central Texas. This paper uses a risk-based partial budget simulation model combined with an aflatoxin contamination simulation model to complete a risk analysis of the decision to use atoxigenic mitigation methods. Field-level data on aflatoxin contamination levels are from Bell County, Texas. A representative farm was simulated with and without atoxigenic treatments, and each case was simulated across a range of crop insurance options available to corn producers in Bell County. A total of 50 scenarios were simulated and compared based on net revenue. Results show atoxigenics do provide a monetary benefit to producers. When the atoxigenic treatment was compared to no atoxigenic treatment, both with no insurance, the simulated average net revenue was higher by $8-$10 per acre for the treatment scenario.
When crop insurance was simulated, with and without atoxigenic treatments, results indicated the current RMA insurance premiums were too high for treatment scenarios. The current RMA premiums did not account for the decreased risk in insurance payout amount and frequency associated with the use of atoxigenics. Current RMA premiums were therefore replaced with fair premiums equal to the simulated mean indemnity payment for all crop insurance options. When the treatment scenario was compared to the no-treatment scenario, under the set of most efficient crop insurance options, atoxigenic treatment provided the producer with an additional net monetary benefit of $8-$16 per acre.

Item: Economic implications of anaerobic digesters on dairy farms in Texas (Texas A&M University, 2007-09-17) Jackson, Randy Scott, Jr.

Historically, air and water have been considered common property resources and, therefore, overutilized as waste receptors. Dairy waste is a leading environmental concern in the North Bosque River watershed in Texas. Changing societal attitudes are forcing dairies and policymakers to balance environmental concerns with farm profitability. Dairies are entering a realm filled with technologies to combat waste concerns. Anaerobic digester technology may play a role in helping dairies balance profit and the environment. Digesters capture methane from livestock waste and transform it into electricity, which can be sold to utilities or used on-farm. Because a digester facility is confined, air and water pollution can be reduced. Technological advancement and institutional changes allowing the sale of on-farm produced electricity, along with green power requirements, have increased the economic feasibility of digesters. This study of the economic implications of anaerobic digesters for Texas dairies provides producers and policymakers with information for making sound decisions concerning adoption and subsidization of this technology.
At the beginning of this study, no digesters were operating in Texas. Dairies operating digesters in four states, therefore, were interviewed on-site to provide the necessary data. The expected net present value, E(NPV), of a plug-flow digester is negative with and without selling electricity, indicating it should not be constructed based strictly on its financial contribution. At the current electricity-selling price, digesters are less economically feasible than the current waste management strategy, lagoons, even after considering potential environmental penalties. However, selling electricity and capturing by-product heat for cost savings makes the plug-flow digester's E(NPV) less negative than that of lagoons. The E(NPV) of a covered lagoon digester is positive, indicating digesters are a potentially feasible waste management strategy. For plug-flow digesters to show a positive E(NPV), the electricity-selling price needs to be approximately 82.38% higher than the current price. The breakeven selling price is 12% higher than the current price. Below the breakeven price, lagoons have a larger E(NPV) than plug-flow digesters, making lagoons the preferred waste management strategy. Results suggest changes in rules and technology efficiency could make digesters economically competitive with current waste management systems.

Item: Examining the Economic Implications and Considerations for Continued Involvement in the Conservation Reserve Program in Texas (2012-10-19) Schuchard, Laura Mae

The Conservation Reserve Program (CRP) has become increasingly important in Texas due to the high level of program participation, particularly in the high plains of Texas. There is also a seemingly large number of CRP contracts that will expire, particularly in the next five years. As these contracts expire, it becomes very important for landowners to fully evaluate the options that are available for future land use.
This research focused primarily on the ten counties in Texas having the most acres of CRP enrollment: Gaines, Deaf Smith, Lamb, Hale, Floyd, Dallam, Hockley, Terry, Castro, and Swisher Counties. The primary objective was to provide landowners in these counties with a comprehensive list of options available after CRP contract expiration. The options were identified as re-enrollment in CRP, conversion back into crop production, leasing the land to a tenant as rangeland, or leasing the land to a tenant as cropland. Latin hypercube simulation was used to generate a stochastic value for probable net returns per acre for the four options. The four options were then evaluated based on a variety of methods typically used to rank risky alternatives. The results indicate that CRP enrollment is the most preferred option for landowners. Dryland crop production, while it can return very high net returns per acre, also involves the highest amount of risk. However, it is important to note that the best ranking method and decision depend on the specific decision maker and situation. The second objective of the research was to determine whether there are measurable economic impacts to the agricultural services industry associated with CRP enrollment. OLS regression models were run for only five of the ten counties in the study area due to a lack of data reported by the Bureau of Economic Analysis. Of the five counties modeled, the Gaines, Dallam, and Hale County models indicated that CRP has played a significant role in the annual earnings of the agricultural services industry. The results suggest that there would be a benefit in conducting further research to examine the relationship between CRP enrollment and the agricultural services sector.

Item: Financial Implications of Intergenerational Farm Transfers (2013-11-25) Peterson, Devin Richard

This study seeks to address the challenge of family farm succession.
A recursive, stochastic simulation model is employed to estimate the financial impacts and accompanying risk incurred through the intergenerational transfer of farm assets and management. The model assists in creating a before-and-after comparative analysis of succession for a large, medium, and small sized representative farm in Texas. Eight methods of farm transfer are analyzed: a will, a trust, buy-sell and lease-to-buy agreements, the formation of business entities, life insurance, gifting, and selling farmland to outside investors. These methods are employed to help minimize estate taxes, create retirement income for the owner, or decrease general transfer costs such as probate fees. The simulation model utilizes stochastic and control variables to create pro forma financial statements that aid in determining net income, debt requirements, and debt outstanding each year over a ten-year period. Key output variables such as the combined net present value (NPV) of the owner and successor and the debt-to-asset ratio are used to analyze financial performance and position. Combined NPV is also employed to rank risky alternatives from most to least preferred using stochastic efficiency with respect to a function. Output variables of estate and gift taxes and debt capital volume are also examined to compare across methods of transfer and to view their effects upon NPV, debt levels, and cash flows. The study finds that the most preferred method varies by farm size, net worth, and the underlying goals of the farmer.

Item: GPU programming for real-time watercolor simulation (Texas A&M University, 2005-02-17) Scott, Jessica Stacy

This thesis presents a method for combining GPU programming with traditional programming to create a fluid-simulation-based watercolor tool for artists. This application provides a graphical interface and a canvas upon which artists can create simulated watercolors in real time.
The GPU, or Graphics Processing Unit, is an efficient and highly parallel processor located on the graphics card of a computer; GPU programming is touted as a way to improve performance in graphics and non-graphics applications. The effectiveness of this method in speeding up large, general-purpose programs, however, is found here to be disappointing. In a small application with minimal CPU/GPU interaction, theoretical speedups of 10 times may be achieved, but with the limitations of communication speed between the GPU and the CPU, gains are slight when this method is used in conjunction with traditional programming.

Item: Implementing Feedback Control on a Novel Proximity Operations Simulation Platform (2012-07-16) Aures-Cavalieri, Kurt Dale

Recently, the Land, Air and Space Robotics (LASR) Laboratory has demonstrated a state-of-the-art proximity operations test bed that will revolutionize the concept of portable space systems simulation. The Holonomic Omni-directional Motion Emulation Robot (HOMER) permits infinite, untethered circumnavigations of one object by another. To allow this platform to operate at the desired performance, an appropriate implementation of feedback control is essential. The dynamic model is derived and presented using a Lagrangian approach. A Lyapunov method is used to form proportional-derivative (PD) and proportional-integral-derivative (PID) feedback controllers. These controllers are validated with computer-based simulation and compared through experimental results. Finally, a frequency analysis is performed in an effort to identify the bandwidth of the system and provide a better understanding of the expected system performance for reference motions containing harmonic perturbations.

Item: Integrated Simulation and Optimization for Decision-Making under Uncertainty with Application to Healthcare (2014-11-26) Alvarado, Michelle

Many real applications require decision-making under uncertainty.
These decisions occur at discrete points in time, influence future decisions, and involve uncertainties that evolve over time. Mean-risk stochastic integer programming (SIP) is one optimization tool for decision problems involving uncertainty. However, it may be challenging to develop a closed-form objective for some problems. Consequently, simulation of the system performance under a combination of conditions becomes necessary. Discrete event system specification (DEVS) is a useful tool for simulation and evaluation, but simulation models do not naturally include a decision-making component. This dissertation develops a novel approach whereby simulation and optimization models interact and exchange information, leading to solutions that adapt to changes in system data. The integrated simulation and optimization approach was applied to the scheduling of chemotherapy appointments in an outpatient oncology clinic. First, a simulation of oncology clinic operations, DEVS-CHEMO, was developed to evaluate system performance from the patient's and management's perspectives. Four scheduling algorithms were developed for DEVS-CHEMO. Computational results showed that assigning patients to both chairs and nurses improved system performance by reducing appointment duration by 3%, reducing waiting time by 34%, and reducing nurse overtime by 4%. Second, a set of mean-risk SIP models, SIP-CHEMO, was developed to determine the start date and resource assignments for each new patient's appointment schedule. SIP-CHEMO considers uncertainty in appointment duration, acuity levels, and resource availability. The SIP-CHEMO models utilize the expected excess and absolute semideviation mean-risk measures. The SIP-CHEMO models increased throughput by 1%, decreased waiting time by 41%, and decreased nurse overtime by 25% when compared to DEVS-CHEMO's scheduling algorithms. Finally, a new framework integrating DEVS and SIP, DEVS-SIP, was developed.
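The expected-excess measure used in the SIP-CHEMO models above has a simple sample form, the mean of max(X - target, 0); a minimal sketch (the waiting-time scenarios, target, and risk weight below are hypothetical, not the dissertation's data):

```python
import statistics

def expected_excess(costs, target):
    """Mean-risk measure: expected amount by which cost exceeds a target."""
    return statistics.fmean(max(c - target, 0.0) for c in costs)

def mean_risk_objective(costs, target, lam):
    """E[cost] + lambda * expected excess, the general form minimized
    in a mean-risk stochastic program."""
    return statistics.fmean(costs) + lam * expected_excess(costs, target)

# Hypothetical waiting-time scenarios (minutes) for one candidate schedule
waits = [10.0, 25.0, 40.0, 15.0]
ee = expected_excess(waits, target=20.0)                  # (5 + 20) / 4
obj = mean_risk_objective(waits, target=20.0, lam=2.0)
```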
The DEVS-CHEMO and SIP-CHEMO models were combined using the DEVS-SIP framework to create DEVS-SIP-CHEMO. Appointment schedules were determined using SIP-CHEMO and implemented in DEVS-CHEMO. If the system performance failed to meet predetermined stopping criteria, DEVS-CHEMO revised SIP-CHEMO and determined a new appointment schedule. Computational results showed that DEVS-SIP-CHEMO is preferred to using simulation or optimization alone. DEVS-SIP-CHEMO held throughput within 1% and improved nurse overtime by 90% and waiting time by 36% when compared to SIP-CHEMO alone.

Item: Integration of well test analysis into naturally fractured reservoir simulation (Texas A&M University, 2006-04-12) Perez Garcia, Laura Elena

Naturally fractured reservoirs (NFR) represent an important percentage of the worldwide hydrocarbon reserves and production. Reservoir simulation is a fundamental technique for characterizing this type of reservoir. Fracture properties are often not available due to the difficulty of characterizing the fracture system. On the other hand, well test analysis is a well-known and widely applied reservoir characterization technique. Well testing in NFR provides two characteristic parameters, the storativity ratio and the interporosity flow coefficient. The storativity ratio is related to fracture porosity. The interporosity flow coefficient can be linked to the shape factor, which is a function of fracture spacing. The purpose of this work is to investigate the feasibility of estimating fracture porosity and fracture spacing from single-well test analysis and to evaluate the use of these two parameters in dual porosity simulation models. The following assumptions were made for this research: 1) fracture compressibility is equal to matrix compressibility; 2) no wellbore storage and skin effects are present; 3) the pressure response is in pseudo-steady state; and 4) flow is single phase.
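The two characteristic parameters named above have standard Warren-and-Root definitions: the storativity ratio omega = (phi*c_t)_f / [(phi*c_t)_f + (phi*c_t)_m], and the interporosity flow coefficient lambda = alpha * (k_m / k_f) * r_w^2, where alpha is the shape factor. A sketch with hypothetical property values (not data from this work):

```python
def storativity_ratio(phi_f, ct_f, phi_m, ct_m):
    """Warren-Root storativity ratio: fracture share of total storage."""
    sf, sm = phi_f * ct_f, phi_m * ct_m
    return sf / (sf + sm)

def interporosity_coefficient(alpha, km, kf, rw):
    """Warren-Root interporosity flow coefficient:
    lambda = alpha * (km / kf) * rw**2."""
    return alpha * (km / kf) * rw ** 2

# Hypothetical properties; fracture compressibility is set equal to
# matrix compressibility, matching assumption 1) of the study.
omega = storativity_ratio(phi_f=0.01, ct_f=1e-5, phi_m=0.10, ct_m=1e-5)
lam = interporosity_coefficient(alpha=0.08, km=0.1, kf=100.0, rw=0.3)
```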
Various simulation models were run and buildup pressure data from a producer well were extracted. Well test analysis was performed and the results were compared to the simulation input data. The results indicate that the storativity ratio provides a good estimate of the magnitude of fracture porosity. The interporosity flow coefficient also provides a reasonable estimate of the magnitude of the shape factor, assuming that matrix permeability is a known parameter. In addition, pressure tests must exhibit all three flow regimes that characterize the pressure response in NFR in order to obtain reliable estimates of fracture porosity and shape factor.

Item Methodology for designing the fuzzy resolver for a radial distribution system fault locator (Texas A&M University, 2006-04-12) Li, Jun
The Power System Automation Lab at Texas A&M University developed a fault location scheme that can be used for radial distribution systems. When a fault occurs, the scheme executes three stages. In the first stage, all data measurements and system information are gathered and processed into suitable formats. In the second stage, three fault location methods are used to assign possibility values to each line section of a feeder. In the last stage, a fuzzy resolver aggregates the outputs of the three fault location methods and assigns a final possibility value to each line section. By aggregating the outputs of the three methods, the fuzzy resolver aims to obtain a smaller subset of line sections as potential faulted sections than any individual fault location method. Fuzzy aggregation operators are used to implement fuzzy resolvers. This dissertation reports on a methodology for utilizing fuzzy aggregation operators in the fuzzy resolver. Three fuzzy aggregation operators, the min, OWA, and uninorm, and two objective functions were used to design the fuzzy resolver.
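The three operator families named above behave quite differently when combining the possibility values that the three fault-location methods assign to one line section. The sketch below uses textbook forms of each operator; the input scores, the OWA weights, and the choice of a representable uninorm with identity 0.5 are illustrative assumptions, not the dissertation's specific design.

```python
# Three fuzzy aggregation operators applied to the possibility values that
# three fault-location methods might assign to one line section.

def agg_min(values):
    """Min operator: a section scores high only if every method agrees."""
    return min(values)

def agg_owa(values, weights):
    """Ordered weighted averaging: weights apply to the *sorted* values,
    not to particular methods, so the operator is symmetric in its inputs."""
    ordered = sorted(values, reverse=True)
    return sum(w * v for w, v in zip(weights, ordered))

def agg_uninorm(x, y):
    """A representable uninorm with identity element 0.5: inputs above 0.5
    reinforce each other, inputs below 0.5 pull the aggregate down."""
    if (x, y) in ((0.0, 1.0), (1.0, 0.0)):
        return 0.0  # conventional value at the otherwise-undefined corners
    return (x * y) / (x * y + (1.0 - x) * (1.0 - y))

scores = [0.8, 0.6, 0.9]
print(agg_min(scores))                   # pessimistic consensus
print(agg_owa(scores, [0.5, 0.3, 0.2]))  # weights must sum to 1
print(agg_uninorm(0.8, 0.9))             # mutual reinforcement above 0.5
```

The uninorm's reinforcement behavior is one plausible reason an unweighted uninorm resolver can rank well: two methods that each moderately implicate a section push its aggregate score higher than either score alone.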
Methodologies for designing fuzzy resolvers with respect to a single objective function and with respect to two objective functions were presented, along with a detailed illustration of the design process and performance studies of the designed resolvers. Because real field data were not available for designing and validating the fuzzy resolver methodology, simulating a distribution feeder was a feasible alternative for generating data: the IEEE 34 node test feeder was modeled, time-current characteristic (TCC) based protective devices were added to it, and faults were simulated on the feeder. Based on the performance studies, the fuzzy resolver designed using the uninorm operator without weights is the first choice, since it requires no optimal weights. The min and OWA operators can also be used to design fuzzy resolvers; for these two operators, the methodology based on two objective functions was the appropriate choice.

Item Modeling the elastic and plastic response of single crystals and polycrystalline aggregates (Texas A&M University, 2005-02-17) Patwardhan, Parag Vilas
Understanding the elastic-plastic response of polycrystalline materials is an extremely difficult task. A polycrystalline material consists of a large number of crystals having different orientations. On its own, each crystal would deform in a specific manner; as part of a polycrystalline aggregate, however, the crystal must maintain compatibility with the aggregate, which changes its response. Knowing the response of a crystal enables us to view the change in orientation of the crystal when subjected to external macroscopic forces. This ability is useful in predicting the evolution of texture in a material.
In addition, by predicting the response of a crystal that is part of a polycrystalline aggregate, we are able to determine the free energy of each crystal. This is useful in studying phenomena such as grain growth and the diffusion of atoms across high-energy grain boundaries. This dissertation begins by presenting an overview of the elastic and plastic response of single crystals. An attempt is made to incorporate a hardening law that can describe the hardening of slip systems for all FCC materials. The most commonly used theories for relating the response of single crystals to that of polycrystalline aggregates are the Taylor model and the Sachs model. A new theory is presented that attempts to encompass both the Taylor and the Sachs models for polycrystalline materials. All of the above features are incorporated into the software program "Crystals".

Item Numerically Efficient Water Quality Modeling and Security Applications (2013-02-04) Mann, Angelica
Chemical and biological contaminants can enter a drinking water distribution system through one of the many access points to the network and can spread quickly, affecting a very large area. This is of great concern, and water utilities need effective tools and mitigation strategies to improve water network security. This work presents two components that have been integrated into EPA's Water Security Toolkit, an open-source software package that includes a set of tools to help water utilities protect the public against potential contamination events. The first component is a novel water quality modeling framework referred to as Merlion. The linear system describing contaminant spread through the network at the core of Merlion provides several advantages and potential uses that are aligned with emerging water security applications. This computational framework can efficiently generate an explicit mathematical model that is easily embedded into larger mathematical systems.
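The linear-system idea at the core of Merlion can be illustrated on a toy example: once the hydraulics are fixed, contaminant concentrations at every node and time step are a linear function of the injection profile, so the whole water quality model collapses into one explicit matrix recursion. The 3-node network and transport matrix below are made up for illustration and are not Merlion's actual model or API.

```python
# Toy linear water quality model: c_{t+1} = A @ c_t + injection_t.
# A[i][j] = fraction of node j's contaminant mass carried to node i per step
# (a made-up 3-node network with flow 0 -> 1 -> 2 plus decay/outflow).
A = [[0.5, 0.0, 0.0],
     [0.4, 0.6, 0.0],
     [0.0, 0.3, 0.7]]

def step(conc, injection):
    """One linear update of nodal concentrations."""
    n = len(conc)
    return [sum(A[i][j] * conc[j] for j in range(n)) + injection[i]
            for i in range(n)]

conc = [0.0, 0.0, 0.0]
for t in range(3):
    # inject one unit of contaminant at node 0, at t = 0 only
    conc = step(conc, [1.0 if t == 0 else 0.0, 0.0, 0.0])
print(conc)  # the contaminant has propagated downstream to nodes 1 and 2
```

Because every update is linear, the map from injections to concentrations is itself one (sparse) linear operator, which is precisely what lets such a model be embedded directly inside a larger optimization problem.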
Merlion can also be used to efficiently simulate a large number of scenarios, speeding up current water security tools by an order of magnitude. The second component is a pair of mixed-integer linear programming (MILP) formulations for efficient source inversion and optimal sampling. The contaminant source inversion problem involves determining the source of contamination from a small set of measurements. The source inversion formulation is able to handle discrete positive/negative measurements from manual grab samples taken over multiple sampling cycles. In addition, sensor/sample placement formulations are extended to determine the optimal locations for the next manual sampling cycle. This approach is enabled by a strategy that significantly reduces the size of the Merlion water quality model, giving rise to a much smaller MILP that is solvable in a real-time setting. The approach is demonstrated on a large-scale water network model with over 12,000 nodes while considering over 100 time steps. The results show that the approach finds the source of contamination remarkably quickly, requiring a small number of sampling cycles and a small number of sampling teams. These tools are being integrated and tested with a real-time response system.
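The matching logic behind source inversion can be shown by brute force on a tiny network: simulate each candidate source through a linear transport model and keep only the candidates consistent with the positive/negative grab samples. The real formulation described above is a MILP solved at scale; this enumeration, with its made-up 3-node transport matrix and detection threshold, only illustrates how measurements shrink the candidate set.

```python
# Brute-force source inversion on a made-up 3-node network.
A = [[0.5, 0.0, 0.0],    # hypothetical one-step transport matrix
     [0.4, 0.6, 0.0],
     [0.0, 0.3, 0.7]]
THRESHOLD = 0.05         # detection limit for a "positive" grab sample

def simulate(source, steps):
    """Concentrations after `steps` steps, unit injection at `source` at t=0."""
    conc = [0.0] * len(A)
    conc[source] = 1.0
    for _ in range(steps):
        conc = [sum(A[i][j] * conc[j] for j in range(len(A)))
                for i in range(len(A))]
    return conc

def consistent_sources(samples, steps):
    """samples: {node: True/False} grab-sample results taken after `steps` steps.
    Returns every candidate source node consistent with all samples."""
    hits = []
    for src in range(len(A)):
        conc = simulate(src, steps)
        if all((conc[node] >= THRESHOLD) == positive
               for node, positive in samples.items()):
            hits.append(src)
    return hits

# Positive sample at node 2 and negative sample at node 0, two steps in:
print(consistent_sources({2: True, 0: False}, steps=2))
```

Each additional sampling cycle adds constraints that eliminate more candidates, which mirrors how the MILP formulation narrows the source set with a small number of cycles.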