Browsing by Subject "Analysis"
Now showing 1 - 20 of 23
Item: An examination and analysis of the metal artifacts from Presidio San Saba (41MN1), Menard County, Texas (2007-08). Norment, Aaron R.; Walter, Tamra L.; Houk, Brett A.
The Spanish Colonial site of Presidio San Sabá, occupied from 1757 to the 1770s, is the largest Spanish fort in Texas. Since 2000, Texas Tech University has conducted archaeological investigations at the site, and as a result large quantities of metal artifacts have been recovered. Given the peripheral location of the fort along the frontier of New Spain, the archaeological record is expected to reflect the isolated situation experienced by the garrison of San Sabá. This thesis describes and explains the collection of metal artifacts recovered from the site within its context as a remote frontier outpost.

Item: Analysis of electrical signatures in synchronous generators characterized by bearing faults (2009-05-15). Choi, Jae-Won
Synchronous generators play a vital role in power systems, and one of their major mechanical faults is related to bearings. Vibration analysis has been the popular method for detecting bearing faults for years. However, bearing health monitoring based on vibration analysis is expensive, in part because it requires costly vibration sensors and the extra costs associated with their proper installation and maintenance. This limitation prevents continuous bearing condition monitoring, which gives better performance for rolling element bearing fault detection than the periodic monitoring that is typical practice for bearing maintenance in industry. A cost-effective alternative is therefore necessary. In this study, a sensorless bearing fault detection method for synchronous generators is proposed based on the analysis of electrical signatures, and its bearing fault detection capability is demonstrated. Experiments with staged bearing faults are conducted to validate the effectiveness of the proposed fault detection method. First, a generator test bed with an in-situ bearing damage device is designed and built. Next, multiple bearing damage experiments are carried out in two vastly different operating conditions in order to obtain statistically significant results. During each experiment, artificially induced bearing current causes accelerated damage to the front bearing of the generator. This in-situ bearing damage process entirely eliminates the need to disassemble and reassemble the experimental setup, which would otherwise distort the armature spectra. The electrical fault indicator is computed from stator voltage signatures without knowledge of machine- and bearing-specific parameters. Experimental results are compared using the electrical indicator and a vibration indicator calculated from measured vibration data. The results indicate that the electrical indicator can be used to analyze health degradation of rolling element bearings in synchronous generators in most instances. Though the vibration indicator enables early bearing fault detection, the electrical fault indicator is also found to be capable of detecting bearing faults well before catastrophic bearing failure.
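The abstract above does not spell out how its electrical fault indicator is computed. As a minimal, hedged illustration of the general idea of a spectral indicator derived from stator voltage, the sketch below sums normalized spectral energy in a band around an assumed bearing-fault characteristic frequency; the sampling rate, fault frequency, and bandwidth are invented placeholders, not values from the thesis.

```python
import numpy as np

def spectral_fault_indicator(voltage, fs, fault_freq, bandwidth=2.0):
    """Fraction of stator-voltage spectral energy in a band around an assumed
    bearing-fault characteristic frequency (illustrative indicator only)."""
    window = np.hanning(len(voltage))
    spectrum = np.abs(np.fft.rfft(voltage * window))
    freqs = np.fft.rfftfreq(len(voltage), d=1.0 / fs)
    band = (freqs > fault_freq - bandwidth) & (freqs < fault_freq + bandwidth)
    return spectrum[band].sum() / spectrum.sum()

# Synthetic example: 60 Hz fundamental plus a weak component at an assumed
# fault frequency of 137.5 Hz; a growing indicator would suggest degradation.
fs = 10_000.0
t = np.arange(0, 2.0, 1.0 / fs)
v = np.sin(2 * np.pi * 60.0 * t) + 0.01 * np.sin(2 * np.pi * 137.5 * t)
print(spectral_fault_indicator(v, fs, fault_freq=137.5))
```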
Item: Assessment of Eagle Ford Shale Oil and Gas Resources (2013-07-30). Gong, Xinglai
The Eagle Ford play in south Texas is currently one of the hottest plays in the United States. In 2012, the average Eagle Ford rig count (269 rigs) was 15% of the total US rig count. Assessment of the oil and gas resources and their associated uncertainties in the early stages is critical for optimal development. The objectives of my research were to develop a probabilistic methodology that can reliably quantify the reserves and resources uncertainties in unconventional oil and gas plays, and to assess Eagle Ford shale oil and gas reserves, contingent resources, and prospective resources. I first developed a Bayesian methodology to generate probabilistic decline curves using Markov Chain Monte Carlo (MCMC) that can quantify the reserves and resources uncertainties in unconventional oil and gas plays. I then divided the Eagle Ford play from the Sligo Shelf Margin to the San Marcos Arch into eight production regions based on fluid type, performance, and geology. I used a combination of the Duong model switching to the Arps model with b = 0.3 at the minimum decline rate to model the linear flow to boundary-dominated flow behavior often observed in shale plays. Cumulative production after 20 years, predicted from Monte Carlo simulation combined with reservoir simulation, was used as prior information in the Bayesian decline-curve methodology. Probabilistic type decline curves for oil and gas were then generated for all production regions. The wells were aggregated probabilistically within each production region and arithmetically between production regions. The total oil reserves and resources range from a P90 of 5.3 to a P10 of 28.7 billion barrels of oil (BBO), with a P50 of 11.7 BBO; the total gas reserves and resources range from a P90 of 53.4 to a P10 of 313.5 trillion cubic feet (TCF), with a P50 of 121.7 TCF. These reserves and resources estimates are much higher than the U.S. Energy Information Administration's 2011 recoverable resource estimates of 3.35 BBO and 21 TCF. The results of this study provide a critical update on the reserves and resources estimates and their associated uncertainties for the Eagle Ford shale formation of South Texas.
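As a rough sketch of the hybrid rate-time model the abstract describes (a Duong decline switching to an Arps hyperbolic decline with b = 0.3 once the decline rate reaches a terminal value), and not the author's actual implementation, the example below uses placeholder well parameters and a daily time grid:

```python
import numpy as np

def duong_rate(t, q1, a, m):
    """Duong decline: q(t) = q1 * t**-m * exp(a/(1-m) * (t**(1-m) - 1)), t in days."""
    return q1 * t ** (-m) * np.exp(a / (1.0 - m) * (t ** (1.0 - m) - 1.0))

def hybrid_rate(t, q1, a, m, b=0.3, d_min=0.10 / 365.25):
    """Duong decline switching to an Arps hyperbolic decline (b = 0.3) once the
    nominal decline rate D(t) = m/t - a*t**-m has dropped to d_min (per day)."""
    decline = m / t - a * t ** (-m)
    q = duong_rate(t, q1, a, m)
    above = np.nonzero(decline > d_min)[0]        # period where decline still exceeds d_min
    if above.size and above[-1] + 1 < len(t):
        i = above[-1] + 1                         # first sample at or below the terminal decline
        tail = t[i:] - t[i]
        q[i:] = q[i] / (1.0 + b * d_min * tail) ** (1.0 / b)
    return q

# Placeholder single-well forecast: daily steps over 20 years, rate in bbl/day.
t = np.arange(1.0, 20 * 365.25)
q = hybrid_rate(t, q1=400.0, a=1.5, m=1.2)
print(f"20-year cumulative ~ {q.sum() / 1e3:.0f} Mbbl")
```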
Item: Automated analysis of linear array images for the detection of human papillomavirus genotypes (2011-05). Wilhelm, Matthew S.; Nutter, Brian; Mitra, Sunanda
Persistent infections with carcinogenic Human Papillomavirus (HPV) are a necessary cause of cervical cancer, which is the fifth most deadly cancer for women worldwide. Approximately 20 million Americans are currently infected with HPV, but only a subset will develop cervical cancer. While a negative HPV test indicates a very low risk for cervical cancer, a positive test cannot discriminate between an innocuous transient infection and a prevalent cancer. Additional information such as HPV genotype and HPV viral load is thought to improve the ability to predict which women will develop cervical cancer. The visual interpretation of hybridization-strip-based HPV genotyping results, however, is heterogeneous and poorly standardized. The need for accurate and repeatable results has led to work toward the development of a robust automated image analysis package for HPV genotyping strips.

Item: Concerto for orchestra (2011-05). Passos, Luís Otávio Teixeira; Grantham, Donald, 1947-; Pinkston, Russell; Pennycook, Bruce; Drott, Eric; Lara, Fernando
Concerto for orchestra is a twenty-minute work for large orchestra. It was conceived from my personal interest in creating a musical narrative that could create different moods, colors, contrast, agreement, tension, and resolution. Ligeti's Double Concerto was a major influence on the pitch, mood, and formal organization. I used his technique of interval signals to differentiate sections of a movement, as well as chromatic balance, the alternation of diatonic scales related chromatically. I also drew influences from Mahler, Debussy, Nancarrow, and from my own work. The narrative of my Concerto is based on Ligeti's notion of states, events, and transformations. My Concerto presents states that are transformed into new states. The piece is divided into four movements: Lights, Convergences, Lights II, and Convergences II. The Lights movements favor delicate textures based on a major melodic line and a subtle accompaniment. They also give prominence to solo sections. Convergences favors the idea of dialogue, multitudinousness, contrast, and dense textures. Convergences II emphasizes the tutti-versus-solo relationship and the ritornello form of Baroque concertos.

Item: Defect analysis using resonant ultrasound spectroscopy (2009-05-15). Flynn, Kevin Joseph
This thesis demonstrates the practicability of using Resonant Ultrasound Spectroscopy (RUS) in combination with Finite Element Analysis (FEA) to determine the size and location of a defect in a material of known geometry and physical constants. Defects were analyzed by comparing the actual change in frequency spectrum measured by RUS to the change in frequency spectrum calculated using FEA. FEA provides a means of determining acceptance/rejection criteria for Non-Destructive Testing (NDT). If FEA models of the object are analyzed with defects in probable locations, the resulting resonant frequency spectra will match the frequency spectra of actual objects with similar defects. By analyzing many FEA-generated frequency spectra, it is possible to identify patterns in the behavior of the resonant frequencies of particular modes based on the nature of the defect (location, size, depth, etc.). Therefore, based on the analysis of sufficient FEA models, it should be possible to determine the nature of defects in a particular object from the measured resonant frequencies. Experiments were conducted on various materials and geometries comparing resonant frequency spectra measured using RUS to frequency spectra calculated using FEA. Measured frequency spectra matched calculated frequency spectra for steel specimens both before and after introduction of a thin cut. Location and depth of the cut were successfully identified based on comparison of measured to calculated resonant frequencies. However, analysis of steel specimens with thin cracks, and of ceramic specimens with thin cracks, showed significant divergence between measured and calculated frequency spectra. Therefore, it was not possible to predict crack depth or location for these specimens. This thesis demonstrates that RUS in combination with FEA can be used as an NDT method for detection and analysis of cracks in various materials, and for various geometries, but with some limitations. Experimental results verify that cracks can be detected, and their depth and location determined with reasonable accuracy. However, experimental results also indicate that there are limits to the applicability of such a method, the primary one being a lower limit on the size of the crack, especially its thickness, for which the method can be applied.
Item: Effects of EGR, water/N2/CO2 injection and oxygen enrichment on the availability destroyed due to combustion for a range of conditions and fuels (2009-06-02). Sivadas, Hari Shanker
This study examined the effects of exhaust gas recirculation (EGR), water/N2/CO2 injection and oxygen enrichment on the availability destroyed by combustion in simple systems, namely constant-pressure and constant-volume systems. Higher cooled EGR fractions lead to higher availability destruction for reactant temperatures less than 2000 K. The availability destroyed for 40% EGR at 300 K for constant-pressure and constant-volume combustion was 36% and 33%, respectively. Neglecting the chemical availability in the products, the equivalence ratio and reactant temperature that corresponded to the lowest availability destruction varied from 0.8 to 1.0 and 800 K to 1300 K, respectively, depending on the EGR fraction. The fraction of the reactant availability destroyed increased with the complexity of the fuel. The trends stayed the same across the different EGR fractions for the eight fuels that were analyzed. Higher injected water fractions lead to higher availability destruction for reactant temperatures less than 1000 K. The availability destroyed for a 40% injected water fraction at 300 K for constant-pressure combustion was 36%. The product temperature ranged from 2300 K to 450 K at a reactant temperature of 300 K for injected fractions from 0% to 90%. For a 40% injected fraction at a reactant temperature of 300 K, water injection and cooled EGR resulted in the greatest destruction of availability (about 36%), with CO2 injection leading to the least destruction (about 32%). Constant-volume combustion destroyed less availability than constant-pressure combustion at a reactant pressure of 50 kPa. At a higher reactant pressure of 5000 kPa, constant-pressure combustion destroyed less availability than constant-volume combustion for reactant temperatures above 1000 K. Higher fractions of oxygen in the inlet lead to higher product temperatures, which in turn lead to lower availability destruction. For 40% oxygen in the inlet, the product temperature increased to 2900 K and the availability destroyed dropped to 25% at a reactant temperature of 300 K for constant-pressure combustion.
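For an adiabatic reaction, the availability (exergy) destroyed can be estimated from the entropy generated across combustion via the Gouy-Stodola relation. The sketch below is a rough illustration of that calculation rather than the author's model; it uses the Cantera library with the GRI-Mech 3.0 mechanism and assumed inlet conditions for constant-pressure methane/air combustion.

```python
import cantera as ct

T0 = 298.15                       # dead-state (environment) temperature in K, assumed
gas = ct.Solution("gri30.yaml")   # GRI-Mech 3.0; adequate for a methane/air illustration

# Assumed reactants: stoichiometric methane/air at 300 K and 1 atm.
gas.TPX = 300.0, ct.one_atm, "CH4:1, O2:2, N2:7.52"
s_react = gas.entropy_mass

# Adiabatic, constant-pressure combustion to equilibrium products.
gas.equilibrate("HP")
s_prod = gas.entropy_mass

# For an adiabatic process, the availability destroyed per kg of mixture is
# T0 * S_gen (Gouy-Stodola); here S_gen equals the entropy rise of the mixture.
a_destroyed = T0 * (s_prod - s_react)
print(f"flame T = {gas.T:.0f} K, availability destroyed ~ {a_destroyed / 1e3:.0f} kJ/kg")
```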
Item: Effects of prestress on strains and deflections in pretensioned beams (2013-12). Koutrouvelis, Stergios; Tassoulas, John Lambros
In this research, nonlinear structural analysis along with finite element analysis was carried out for a pretensioned concrete beam at different levels of pretension in order to examine the effect of the change in the tendon force on the geometric stiffness of the beam. Several results were obtained for deflection, horizontal displacement, and surface strains to investigate how they are affected by the level of pretension under the application of the same load in each case. These computations were compared with the tendon force to determine whether they can be used to estimate the pretension level by means of simple measurements. The purpose was to develop a methodology for quantifying prestress losses by taking advantage of the dependence of the prestressed concrete beam's stiffness on the tendon force.

Item: Endogenous variables and weak instruments in cross-sectional nutrient demand and health information analysis: a comparison of solutions (Texas A&M University, 2004-09-30). Bakhtavoryan, Rafael Gagik
In recent years, increasing attention has turned toward the effect of health information or health knowledge on nutrient intake. In determining the effect of health information on nutrient demand, researchers face the estimation problem of dealing with the endogeneity of health knowledge. The standard approach for dealing with this problem is an instrumental variables (IV) procedure. Unfortunately, recent research has demonstrated that the IV procedure may not be reliable in the types of data sets that contain health information and nutrient intakes because the instruments are not sufficiently correlated with the endogenous variables (i.e., the instruments are weak). This thesis compares the reliability of the IV procedure (and the Hausman test) with a relatively new procedure, directed graphs, given weak instruments. The goal is to determine whether the method of directed graphs performs better in identifying an endogenous variable as well as relevant instruments. The performance of the Hausman test and directed graphs is first assessed through a Monte Carlo sampling experiment containing weak instruments. Because the structure of the model is known in the Monte Carlo experiment, these results are used as a guideline to determine which procedure would be more reliable in a real-world setting. The procedures are then applied to a real-world cross-sectional dataset on nutrient intake. This thesis provides empirical evidence that neither the IV estimator (and Hausman test) nor the directed graphs are reliable when instruments are weak, as in a cross-sectional dataset.
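To make the weak-instrument issue concrete, the sketch below is a generic textbook-style simulation (not the thesis's experiment): it generates data with an endogenous regressor and a weakly correlated instrument, then compares OLS and two-stage least squares (2SLS) estimates over Monte Carlo replications. The sample size, instrument strength, and true coefficient are assumed values.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(n=500, pi=0.05, beta=1.0):
    """One draw: x is endogenous (shares the error u with y); z is a weak instrument."""
    z = rng.normal(size=n)
    u = rng.normal(size=n)
    x = pi * z + u + rng.normal(size=n)          # corr(x, u) != 0  ->  endogeneity
    y = beta * x + u + rng.normal(size=n)
    return y, x, z

def ols(y, x):
    return np.sum(x * y) / np.sum(x * x)

def tsls(y, x, z):
    x_hat = z * (np.sum(z * x) / np.sum(z * z))  # first-stage fitted values
    return np.sum(x_hat * y) / np.sum(x_hat * x) # second-stage (IV) estimate

draws = [simulate() for _ in range(2000)]
print("true beta = 1.0")
print("OLS  median estimate:", np.median([ols(y, x) for y, x, _ in draws]))
print("2SLS median estimate:", np.median([tsls(y, x, z) for y, x, z in draws]))
# With pi this small the instrument is weak: the IV estimate is biased toward
# OLS and widely dispersed, mirroring the unreliability the thesis documents.
```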
Item: Exploration, Registration, and Analysis of High-Throughput 3D Microscopy Data from the Knife-Edge Scanning Microscope (2014-04-25). Sung, Chul
Advances in high-throughput, high-volume microscopy techniques have enabled the acquisition of extremely detailed anatomical structures of human or animal organs. The Knife-Edge Scanning Microscope (KESM) is one of the first instruments to produce sub-micrometer resolution (~1 µm³) data from whole small animal brains. Using the KESM, we successfully imaged entire mouse brains stained with Golgi (neuronal morphology), India ink (vascular network), and Nissl (soma distribution). Our data sets fill the gap left by most existing data sets, which have only partial organ coverage or orders-of-magnitude lower resolution. However, even with such unprecedented data sets, we still lack a suitable informatics platform to visualize and quantitatively analyze them. This dissertation is designed to address three key gaps: (1) due to the large volume (several teravoxels) and the multiscale nature of the data, visualization alone is a huge challenge, let alone quantitative connectivity analysis; (2) the size of the uncompressed KESM data exceeds a few terabytes, and to compare and combine with other data sets from different imaging modalities, the KESM data must be registered to a standard coordinate space; and (3) quantitative analysis that seeks to count every neuron in our massive, growing, and sparsely labeled data is a serious challenge. The goals of my dissertation are as follows: (1) develop an online neuroinformatics framework for efficient visualization and analysis of the multiscale KESM data sets, (2) develop a robust landmark-based 3D registration method for mapping the KESM Nissl-stained entire mouse data into the Waxholm Space (a canonical coordinate system for the mouse brain), and (3) develop a scalable, incremental learning algorithm for cell detection in high-resolution KESM Nissl data. For the web-based neuroinformatics framework, I prepared multi-scale data sets at different zoom levels from the original data sets. I then extended the Google Maps API to develop atlas features such as scale bars, panel browsing, and transparent overlay for 3D rendering. Next, I adapted the OpenLayers API, a free mapping and layering API supporting functionality similar to the Google Maps API. Furthermore, I prepared multi-scale data sets in vector graphics to improve page loading time by reducing file size. To better appreciate the full 3D morphology of the objects embedded in the data volumes, I developed a WebGL-based approach that complements the web-based framework for interactive viewing. For the registration work, I adapted and customized a stable 2D rigid deformation method to map our data sets to the Waxholm Space. For the analysis of neuronal distribution, I designed and implemented a scalable, effective quantitative analysis method using supervised learning. I utilized Principal Components Analysis (PCA) in a supervised manner and implemented the algorithm using MapReduce parallelization. I expect my frameworks to enable effective exploration and analysis of our KESM data sets. In addition, I expect my approaches to be broadly applicable to the analysis of other high-throughput medical imaging data.
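As a toy illustration of using PCA in a supervised manner for cell detection (the dissertation's actual pipeline is MapReduce-parallelized and operates on Nissl-stained KESM volumes), the sketch below projects labeled image patches onto principal components and trains a classifier on the projections with scikit-learn. The patch size, component count, and data are all invented stand-ins, so the scores here are near chance; the point is only the shape of the pipeline.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Stand-in data: in a real pipeline, `patches` would be small image windows
# cut from Nissl-stained sections and `labels` would mark soma centers.
rng = np.random.default_rng(1)
patches = rng.random((1000, 15, 15))          # synthetic 15x15 grayscale patches
labels = rng.integers(0, 2, size=1000)        # synthetic labels

X = patches.reshape(len(patches), -1)         # flatten each patch into a feature vector
model = make_pipeline(PCA(n_components=20),   # project onto principal components
                      LogisticRegression(max_iter=1000))
print(cross_val_score(model, X, labels, cv=5).mean())
```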
Item: Finite element analysis of storm shelters subjected to blast loads (Texas Tech University, 2006-08). Zain, Mohammed Aidroos; Kiesling, Ernst W.; Budek, Andrew; Smith, Douglas A.; Murray, John P.
The storm shelter designs presented in FEMA 320, Taking Shelter from the Storm: Building a Safe Room Inside Your House, and FEMA 361, Design and Construction Guidance for Community Shelters, were developed to protect people from the potentially disastrous effects of extreme winds. In these publications, a number of prescriptive designs are presented for small residential and community storm shelters. These shelters are designed to withstand wind-induced pressures associated with 250 mph ground-level wind speeds generated by worst-case tornadoes. Designs were developed and tested on the basis of debris impact resistance. Experimental studies conducted on a full-scale storm shelter revealed that storm shelters built according to FEMA 320 prescriptive designs can withstand wind-induced overpressures much higher than the design values assumed for worst-case tornadoes. This finding suggests that these shelters might withstand loads associated with low-level explosion pressures. The storm shelters were analyzed using the finite element method; the same analytical tools were used to analyze the shelters against blast loads. 3D dynamic and static analyses using the ALGOR finite element software package were used to perform this study. The goals of this research were to study and apply blast loads on civilian structures, to study the behavior and response of different storm shelters under the effects of blast loads, and to study the differences between the analytical results of the static and dynamic analyses of structures subjected to blast loads. An important objective was to determine the ability of storm shelters to withstand the effects of explosions of various magnitudes at specified distances from the shelters.
Item: Large-scale network analytics (2011-08). Song, Han Hee, 1978-; Zhang, Yin, doctor of computer science
Scalable and accurate analysis of networks is essential to a wide variety of existing and emerging network systems. Specifically, network measurement and analysis helps to understand networks, improve existing services, and enable new data-mining applications. To support various services and applications in large-scale networks, network analytics must address the following challenges: (i) how to conduct scalable analysis in networks with a large number of nodes and links, (ii) how to flexibly accommodate various objectives from different administrative tasks, and (iii) how to cope with dynamic changes in the networks. This dissertation presents novel path analysis schemes that effectively address the above challenges in analyzing pair-wise relationships among networked entities. In doing so, we make three major contributions, to large-scale IP networks, social networks, and application service networks. For IP networks, we propose an accurate and flexible framework for path property monitoring. Analyzing the performance of paths between pairs of nodes, our framework incorporates approaches that perform both exact and approximate reconstruction of path properties. Our framework is scalable enough to design measurement experiments that span thousands of routers and end hosts, and flexible enough to accommodate a variety of design requirements. For social networks, we present scalable and accurate graph embedding schemes. Aimed at analyzing the pair-wise relationships of social network users, we present three dimensionality reduction schemes leveraging matrix factorization, count-min sketch, and graph clustering paired with spectral graph embedding. As concrete applications showing the practical value of our schemes, we apply them to the important social analysis tasks of proximity estimation, missing link inference, and link prediction. The results clearly demonstrate the accuracy, scalability, and flexibility of our schemes for analyzing social networks with millions of nodes and tens of millions of links. For application service networks, we provide a proactive service quality assessment scheme. Analyzing the relationship between the satisfaction level of subscribers of an IPTV service and network performance indicators, our proposed scheme proactively (i.e., detecting issues before IPTV subscribers complain) assesses user-perceived service quality using performance metrics collected from the network. In our evaluation using network data collected from a commercial IPTV service provider, we show that our scheme is able to predict 60% of the service problems reported by customers with only 0.1% false positives.
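As a generic illustration of the dimensionality-reduction idea behind proximity estimation (a low-rank sketch, not the dissertation's specific algorithms), the example below factors a toy adjacency matrix with a truncated SVD and estimates pairwise proximity from the resulting low-dimensional node coordinates using SciPy. The graph, rank, and sizes are arbitrary assumptions.

```python
import numpy as np
from scipy.sparse import random as sparse_random
from scipy.sparse.linalg import svds

# Assumed toy graph: a random sparse adjacency matrix, symmetrized.
A = sparse_random(2000, 2000, density=0.002, random_state=2, format="csr")
A = ((A + A.T) > 0).astype(float)

# Rank-k truncated SVD: A is approximated by U @ diag(s) @ Vt.
k = 32
U, s, Vt = svds(A, k=k)
src = U * s              # "source" coordinates of each node
dst = Vt.T               # "destination" coordinates of each node

def proximity(i, j):
    """Low-rank estimate of the (i, j) entry of the proximity/adjacency matrix."""
    return float(src[i] @ dst[j])

print(proximity(0, 1), A[0, 1])
```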
Item: Maternal transfer and tissue distribution of HMX in quail eggs (2007-05). Liu, Jun; Cobb, George P.; Smith, Philip N.; Anderson, Todd A.
An efficient sample extraction and cleanup method was developed for determination of octahydro-1,3,5,7-tetranitro-1,3,5,7-tetrazocine (HMX) in eggs. The procedure included solvent extraction of HMX from eggs followed by cleanup using Florisil and styrene-divinylbenzene (SDB) cartridges. Chromatographic separation was achieved on a reverse phase (RP) C18 column, with a mobile phase containing 60% methanol + 40% 1.0 mM acetic acid aqueous solution. Overall recoveries from eggs containing 10, 50, 250 and 1000 ng/g of HMX were 84.0%, 88.0%, 90.6% and 87.4%. A method detection limit (MDL) of 0.15 ng/g was achieved. We then evaluated the use of the gas exchange rate as an indicator of chemical stress in avian embryos/eggs. Northern bobwhite quail (Colinus virginianus) were exposed to HMX via feed at concentrations of 0, 12.5, 50.0, and 125.0 mg/kg. Metabolic rates (oxygen consumption) of incubated quail eggs were then measured via respirometry to examine potential effects of HMX exposure. Metabolic rate was examined on days 5, 9, and 21 of incubation. Next, concentrations of HMX in eggs were determined by liquid chromatography-mass spectrometry. Concentrations of HMX in eggs from the four dose groups were significantly different. Mean (± SE) concentrations of HMX in quail eggs were 1025 ± 77, 3610 ± 143 and 7021 ± 300 ng/g in the low, medium and high dose groups, respectively. A significant difference in oxygen consumption rates was observed among eggs at the three developmental stages (p

Item: The metrics of spacecraft design reusability and cost analysis as applied to CubeSats (2012-05). Brumbaugh, Katharine Mary; Lightsey, E. Glenn; Guerra, Lisa
The University of Texas at Austin (UT-Austin) Satellite Design Lab (SDL) is currently designing two 3U CubeSat spacecraft – Bevo-2 and ARMADILLO – which serve as the foundation for the design reusability and cost analysis of this thesis. The thesis explores the reasons why a small satellite would want to incorporate a reusable design and the processes needed for this reusable design to be implemented in future projects. Design and process reusability reduces the total cost of the spacecraft, as future projects need only alter the components or documents necessary to create a new mission. The thesis also details a grassroots approach to determining the total cost of a 3U CubeSat satellite development project and highlights the costs which may be considered non-recurring and recurring in order to show the financial benefit of reusability. The thesis then compares these results to typical models used for cost analysis in industry applications. The cost analysis determines that there is a crucial gap in the cost estimating of nanosatellites, which may be seen by comparing two widely used cost models, the Small Satellite Cost Model (SSCM, <100 kg) and the NASA/Air Force Cost Model (NAFCOM), as they apply to a 3U CubeSat project. While each of these models provides a basic understanding of the elements that go into cost estimating, their Cost Estimating Relationships (CERs) do not have enough historical data on picosatellites and nanosatellites (<50 kg) to accurately reflect mission costs. Thus, the thesis documents a discrepancy between widely used industry spacecraft cost models and the needs of the picosatellite and nanosatellite community, specifically universities, to accurately predict their mission costs. It is recommended to develop a nanosatellite/CubeSat cost model with which university and industry developers alike can determine their mission costs during the design, build, and operational stages. Because cost models require the use of many missions to form a database, it is important to start this process now, at the beginning of the nanosatellite/CubeSat boom.

Item: Nonlinear modeling of Texas highway bridges for seismic response-history analysis (2016-12). Prakhov, Vyacheslav Oleksiyovich; Clayton, Patricia M.; Williamson, Eric B., 1968-
A recent increase in the number of earthquakes across the state of Texas has raised concerns about the seismic performance of highway bridges in the state inventory, the vast majority of which were not explicitly designed to withstand earthquake loading. Potential causes of seismic damage include column shear failure due to low transverse reinforcement ratios and non-seismic detailing, girder unseating due to excessive bearing deformation or instability, deck pounding, and others. The objective of the study is to develop bridge numerical models for nonlinear response-history analysis, taking into consideration Texas-specific design and detailing practices. Using the models developed, the fragility of Texas bridges can be analyzed and systematically quantified, allowing state highway officials to efficiently identify the bridges most likely to be damaged after an earthquake. Component models for all major bridge parts were developed for this study, including the superstructure, deck joint, bearing, bent, foundation, and abutment. The models were developed based on past experimental, analytical, and numerical work from the literature, accounting for the mass, stiffness, and damping properties of each bridge component. Damage was accounted for using nonlinear hinge models capable of simulating stiffness degradation and hysteretic behavior based on the specific properties and expected limit states of each bridge component. Finally, a MATLAB script was developed to assemble bridge component models into full bridge models depending on user input of geometric and material properties of an individual bridge sample.
Item: On the crushing of honeycomb under axial compression (2010-12). Wilbert, Adrien; Kyriakides, S.; Ravi-Chandar, Krishnaswamy
This thesis presents a comprehensive study of the compressive response of hexagonal honeycomb panels from the initial elastic regime to a fully crushed state. Expanded aluminum alloy honeycomb panels with a cell size of 0.375 in (9.53 mm), a relative density of 0.026, and a height of 0.625 in (15.9 mm) are laterally compressed quasi-statically between rigid platens under displacement control. The cells buckle elastically and collapse at a higher stress due to inelastic action. Deformation then first localizes at mid-height, and the cells crush by progressive formation of folds; associated with each fold family is a stress undulation. The response densifies when the whole panel height is consumed by folds. The buckling, collapse, and crushing events are simulated numerically using finite element models involving periodic domains of a single or several characteristic cells. The models idealize the microstructure as hexagonal, with double walls in one direction. The nonlinear behavior is initiated by elastic buckling, while inelastic collapse, which leads to the localization observed in the experiments, occurs at a significantly higher load. The collapse stress is found to be mildly sensitive to various problem imperfections. For the particular honeycomb studied, the collapse stress is 67% higher than the buckling stress. It was also shown that all aspects of the compressive behavior can be reproduced numerically using periodic domains with a fine mesh capable of capturing the complexity of the folds. The calculated buckling stress is reduced when considering periodic square domains, as the compatibility of the buckles between neighboring cells tends to make the structure more compliant. The mode consisting of three half waves is observed in every simulation, but its amplitude is accented at the center of the domains. The calculated crushing response better resembles the measured ones when a 4x4 cell domain is used, which is smoother and reproduces decays in the amplitude of load peaks. However, the average crushing stress can be captured with engineering accuracy even from a single cell domain.

Item: Performance analysis of the Parallel Community Atmosphere Model (CAM) application (2009-06-02). Shawky Sharkawi, Sameh Sherif
Efficient execution of parallel applications requires insight into how the features of the parallel system impact the performance of the application. Significant experimental analysis and the development of performance models enhance the understanding of such an impact. Deep understanding of an application's major kernels and their design leads to a better understanding of the application's performance and, hence, to the development of better performance models. The Community Atmosphere Model (CAM) is the latest in a series of global atmospheric models developed at the National Center for Atmospheric Research (NCAR) as a community tool for NCAR and the university research community. This work focuses on analyzing CAM and understanding the impact of different architectures on this application. In the analysis of CAM, kernel coupling, which quantifies the interaction between adjacent kernels and chains of kernels in an application, is used. All experiments are conducted on four parallel platforms: NERSC (National Energy Research Scientific Computing Center) Seaborg, SDSC (San Diego Supercomputer Center) DataStar P655, DataStar P690, and PSC (Pittsburgh Supercomputing Center) Lemieux. Experimental results indicate that kernel coupling gives insight into many of the application's characteristics. One important characteristic of CAM is that its performance depends heavily on the parallel platform's memory hierarchy; different cache sizes and different cache policies had the major effect on CAM's performance. Also, the coupling values showed that although CAM's kernels share many data structures, most of the coupling values are still destructive (i.e., the kernels interfere with each other so as to adversely affect performance). The kernel coupling results help developers pinpoint the bottlenecks in memory usage in CAM. The results obtained from processor partitioning are significant in helping CAM users choose the right platform on which to run CAM.
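Kernel coupling, as used in this analysis, quantifies whether two kernels run faster or slower when executed together than their isolated timings would suggest. The sketch below computes one common form of the pairwise coupling value from measured runtimes; the definition is the standard one from the performance-modeling literature rather than a formula quoted in the abstract, and the timing numbers are invented placeholders, not CAM measurements.

```python
# Pairwise kernel coupling: c_ij = t_ij / (t_i + t_j), where t_i and t_j are
# isolated kernel runtimes and t_ij is the runtime of the two kernels chained.
# c_ij < 1 suggests constructive coupling (e.g., cache reuse between kernels);
# c_ij > 1 suggests destructive coupling (e.g., cache interference).

def coupling(t_i: float, t_j: float, t_ij: float) -> float:
    return t_ij / (t_i + t_j)

# Invented example timings (seconds) for two adjacent kernels:
t_dynamics, t_physics, t_chained = 12.4, 30.1, 46.8
c = coupling(t_dynamics, t_physics, t_chained)
print(f"coupling value = {c:.2f} ({'destructive' if c > 1 else 'constructive'})")
```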
Item: Process Synthesis and Optimization of Biorefinery Configurations (2012-10-19). Pham, Viet
The objective of this research was to develop novel and applicable methodologies to systematically solve problems along a roadmap toward a globally optimal biorefinery design. The roadmap consists of the following problems: (1) synthesis of conceptual biorefinery pathways from given feedstocks and products, (2) screening of the synthesized pathways to identify the most economic pathways, (3) development of a flexible biorefinery configuration, and (4) techno-economic analysis of a detailed biorefinery design. In the synthesis problem, a systems-based "forward-backward" approach was developed. It involves forward synthesis of biomass to possible intermediates and reverse synthesis starting with desired products and identifying necessary species and pathways leading to them. Two activities are then performed to generate complete biorefinery pathways: matching (if one of the species synthesized in the forward step is also generated by the reverse step) or interception (a task is determined to take a forward-generated species to a reverse-generated species, by identifying a known process or by using reaction pathway synthesis to link the two species). In the screening problem, Bellman's Principle of Optimality was applied to decompose the optimization problem into sub-problems in which an optimal policy of available technologies was determined for every conversion step. Subsequently, either a linear programming formulation or a dynamic programming algorithm was used to determine the optimal pathways. In the configuration design problem, a new class of design problems with flexibility was proposed to build the most profitable plants that operate only when economic efficiency is favored. A new formulation approach with proposed constraints called disjunctive operation mode was also developed to solve the design problems. In the techno-economic analysis of a detailed biorefinery design, the process producing hydrocarbon fuels from lignocellulose via the carboxylate platform was studied. This analysis employed state-of-the-art chemical engineering fundamentals and used extensive sources of published data and advanced computing resources to yield reliable conclusions. Case studies of alcohol-producing pathways from lignocellulosic biomass were discussed to demonstrate the merits of the proposed approaches for the first three problems. The process was extended to produce hydrocarbon fuels in the last problem.
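The screening step applies Bellman's Principle of Optimality over a chain of conversion steps. The sketch below shows the idea on an invented pathway network, choosing the cheapest technology for each species-to-species conversion by dynamic programming; the species names, technologies, and costs are purely illustrative, not taken from the dissertation.

```python
from functools import lru_cache

# Invented conversion network: species -> list of (next_species, technology, cost).
CONVERSIONS = {
    "biomass": [("sugars", "hydrolysis", 4.0), ("syngas", "gasification", 6.0)],
    "sugars":  [("ethanol", "fermentation", 3.0)],
    "syngas":  [("ethanol", "syngas fermentation", 2.5), ("diesel", "FT synthesis", 5.0)],
    "ethanol": [("diesel", "oligomerization", 3.5)],
}

@lru_cache(maxsize=None)
def best_cost(species, product):
    """Bellman recursion: cheapest cumulative cost (and path) to turn `species` into `product`."""
    if species == product:
        return 0.0, ()
    best = (float("inf"), ())
    for nxt, tech, cost in CONVERSIONS.get(species, []):
        sub_cost, sub_path = best_cost(nxt, product)
        total = cost + sub_cost
        if total < best[0]:
            best = (total, ((species, nxt, tech),) + sub_path)
    return best

cost, path = best_cost("biomass", "diesel")
print(cost, path)
```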
Item: Scalable Analysis, Verification and Design of IC Power Delivery (2012-02-14). Zeng, Zhiyu
Due to recent aggressive process scaling into the nanometer regime, power delivery network design faces many challenges that place more stringent and specific requirements on EDA tools. For example, from the perspective of analysis, simulation efficiency for large grids must be improved, and it must be possible to analyze the entire network together with off-chip models and nonlinear devices. Gated power delivery networks have multiple on/off operating conditions that need to be fully verified against the design requirements. Good power delivery network designs not only have to save wiring resources for signal routing, but also need to have optimal parameters assigned to various system components such as decaps, voltage regulators, and converters. This dissertation presents new methodologies to address these challenging problems. First, a novel parallel partitioning-based approach, which provides a flexible network partitioning scheme using locality, is proposed for power grid static analysis. In addition, a fast combined CPU-GPU analysis engine that adopts a boundary-relaxation method to encompass several simulation strategies is developed to simulate power delivery networks with off-chip models and active circuits. These two proposed analysis approaches achieve scalable simulation runtime. Then, for gated power delivery networks, the challenge posed by the large verification space is addressed by developing a strategy that efficiently identifies a number of candidates for the worst-case operating condition; the computational complexity is reduced from O(2^N) to O(N). Finally, motivated by a proposed two-level hierarchical optimization, this dissertation presents a novel locality-driven partitioning scheme to facilitate divide-and-conquer-based scalable wire sizing for large power delivery networks. Simultaneous sizing of multiple partitions is allowed, which leads to substantial runtime improvement. Moreover, the electrical interactions between active regulators/converters and passive networks, and their influence on key system design specifications, are analyzed comprehensively. With the derived design insights, the system-level co-design of a complete power delivery network is facilitated by an automatic optimization flow. Results show significant performance enhancement brought by the co-design.
Item: Seismic Attribute Analysis Using Higher Order Statistics (2009-05-15). Greenidge, Janelle Candice
Seismic data processing depends on mathematical and statistical tools such as convolution, crosscorrelation, and stack that employ second-order statistics (SOS). Seismic signals are non-Gaussian and therefore contain information beyond SOS. One of the modern challenges of seismic data processing is reformulating algorithms, e.g., migration, to utilize the extra higher-order statistics (HOS) information in seismic data. The migration algorithm has two key components: the moveout correction, which corresponds to the crosscorrelation of the migration operator with the data at zero lag, and the stack of the moveout-corrected data. This study reformulated the standard migration algorithm to handle HOS information by improving the stack component, assuming that the moveout correction is accurate. The reformulated migration algorithm outputs not only the standard stack, but also the variance, skewness, and kurtosis of the moveout-corrected data. The mean (stack) of the moveout-corrected data in this new concept is equivalent to the migration currently performed in industry. The variance of the moveout-corrected data is one of the new outputs obtained from the reformulation. Though it characterizes SOS information, it is not one of the outputs of standard migration. In cases where the seismic amplitude variation with offset (AVO) response is linear, a single algorithm that outputs mean (stack) and variance combines standard AVO analysis and migration, thereby significantly reducing the cost of seismic data processing. Furthermore, this single algorithm improves the resolution of seismic imaging, since it does not require explicit knowledge of reflection angles to retrieve AVO information. In the reformulation, HOS information is captured by the skewness and kurtosis of the moveout-corrected data. These two outputs characterize the nonlinear AVO response and any non-Gaussian noise (symmetric or nonsymmetric) contained in the data. Skewness characterizes nonsymmetric non-Gaussian noise, whereas kurtosis characterizes symmetric non-Gaussian noise. These outputs also characterize any errors associated with the moveout corrections. While classical seismic data processing provides a single output, HOS-related processing outputs three extra parameters: the variance, skewness, and kurtosis. These parameters can better characterize geological formations and improve the accuracy of the seismic data processing performed before the application of the reformulated migration algorithm.
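As a minimal sketch of the reformulated stack described above (not the thesis's production code), the example below takes a moveout-corrected gather as a 2-D array of traces and outputs the four per-sample statistics using NumPy and SciPy; the synthetic gather, trace count, and noise model are assumptions for illustration.

```python
import numpy as np
from scipy import stats

def hos_stack(gather):
    """gather: moveout-corrected traces, shape (n_traces, n_samples).
    Returns per-sample mean (conventional stack), variance, skewness, kurtosis."""
    mean = gather.mean(axis=0)           # standard stack output
    var = gather.var(axis=0)             # second-order (AVO-related) output
    skew = stats.skew(gather, axis=0)    # flags nonsymmetric non-Gaussian noise
    kurt = stats.kurtosis(gather, axis=0)  # flags symmetric non-Gaussian noise
    return mean, var, skew, kurt

# Synthetic gather: a flat reflector plus heavy-tailed noise on 48 traces.
rng = np.random.default_rng(3)
gather = 0.2 * rng.standard_t(df=3, size=(48, 500))
gather[:, 250] += 1.0                    # reflection at sample 250
mean, var, skew, kurt = hos_stack(gather)
print(mean[250], kurt[250])
```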