Browsing by Subject "Reliability"
Now showing 1 - 20 of 54
Item: A Probabilistic Deformation Demand Model and Fragility Estimates for Asymmetric Offshore Jacket Platforms (2012-11-12)
Fallon, Michael Brooks
Interest in evaluating the performance and safety of offshore oil and gas platforms has been expanding due to the growing world energy supply and recent offshore catastrophes. In order to accurately assess the reliability of an offshore platform, all relevant uncertainties must be properly accounted for. This necessitates the development of a probabilistic demand model that accounts for the relevant uncertainties and model errors. In this study, a probabilistic demand model is developed to assess the deformation demand on asymmetric offshore jacket platforms subject to wave and current loadings. The probabilistic model is constructed by adding correction terms and a model error to an existing deterministic deformation demand model. The correction terms are developed to capture the bias inherent in the deterministic model. The model error is developed to capture the accuracy of the model. The correction terms and model errors are estimated through a Bayesian approach using simulation data obtained from detailed dynamic analyses of a set of representative asymmetric offshore platform configurations. The proposed demand model provides accurate and unbiased estimates of the deformation demand on offshore jacket platforms. The developed probabilistic demand model is then used to assess the reliability of a typical offshore platform considering serviceability and ultimate performance levels. In addition, a sensitivity analysis is conducted to assess the effect of key parameters on the results of the analyses. The proposed demand model can be used to assess the reliability of different design options and for the reliability-based optimal design of offshore jacket platforms.
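As a rough illustration of the kind of model this abstract describes, the sketch below fits bias-correction terms and a model-error standard deviation on top of a deterministic demand prediction. The deterministic model, the correction basis functions, and the synthetic data are all hypothetical stand-ins, and the ordinary least squares used here only approximates the full Bayesian estimation the thesis performs.

```python
import numpy as np

rng = np.random.default_rng(0)

def deterministic_demand(wave_height):
    # Hypothetical deterministic deformation model d_hat(x).
    return 0.05 * wave_height**1.5

# Synthetic "simulation data": wave heights and observed deformation demands.
h = rng.uniform(2.0, 12.0, size=200)
d_obs = deterministic_demand(h) * (1.1 + 0.02 * h) + rng.normal(0.0, 0.05, size=200)

# Probabilistic model: D = d_hat(x) + sum_i theta_i * h_i(x) + sigma * eps.
# The basis (explanatory) functions for the bias correction are assumed here.
H = np.column_stack([deterministic_demand(h), h * deterministic_demand(h)])
residual = d_obs - deterministic_demand(h)

theta, *_ = np.linalg.lstsq(H, residual, rcond=None)
sigma = np.std(residual - H @ theta, ddof=H.shape[1])  # model-error std dev

print("bias-correction coefficients:", theta)
print("model-error sigma:", sigma)
```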
Item: Accounting for the effects of rehabilitation actions on the reliability of flexible pavements: performance modeling and optimization (2009-05-15)
Deshpande, Vighnesh Prakash
A performance model and a reliability-based optimization model for flexible pavements that account for the effects of rehabilitation actions are developed. The developed performance model can be effectively implemented in all applications that require the reliability (performance) of pavements before and after rehabilitation actions. The response surface methodology, in conjunction with Monte Carlo simulation, is used to evaluate pavement fragilities. To provide more flexibility, a parametric regression model that expresses fragilities in terms of decision variables is developed. The developed fragilities are used as performance measures in a reliability-based optimization model. Three decision policies for rehabilitation actions are formulated and evaluated using a genetic algorithm. A multi-objective genetic algorithm is used to obtain the optimal trade-off between performance and cost. To illustrate the developed model, a numerical study is presented. The developed performance model describes well the behavior of flexible pavement before as well as after rehabilitation actions. The sensitivity measures suggest that the reliability of flexible pavements before and after rehabilitation actions can be effectively improved by providing an asphalt layer as thick as possible in the initial design and by improving the subgrade stiffness. The importance measures suggest that the asphalt layer modulus at the time of rehabilitation actions represents the principal uncertainty for the performance after rehabilitation actions. Statistical validation of the developed response model shows that the response surface methodology can be efficiently used to describe pavement responses. The results for the parametric regression model indicate that the developed regression models are able to express the fragilities in terms of decision variables. The numerical illustration for optimization shows that the cost minimization and reliability maximization formulations can be efficiently used in determining optimal rehabilitation policies. Pareto optimal solutions obtained from the multi-objective genetic algorithm can be used to obtain the trade-off between cost and performance and to avoid possible conflict between the two decision policies.
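A fragility here is the probability that a pavement response exceeds a failure threshold for a given set of design variables. The sketch below estimates one fragility point by Monte Carlo simulation; the rutting model, input distributions, and 0.5 in threshold are hypothetical placeholders, not the thesis's fitted response surface.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Hypothetical random inputs: asphalt thickness (in) and subgrade modulus (ksi).
thickness = rng.normal(6.0, 0.5, n)
subgrade = rng.lognormal(np.log(12.0), 0.2, n)

# Stand-in rutting model (inches), decreasing in thickness and stiffness,
# with a multiplicative lognormal model error.
rut = (2.0 * np.exp(-0.25 * thickness) * (12.0 / subgrade) ** 0.4
       * rng.lognormal(0.0, 0.15, n))

threshold = 0.5  # failure criterion: rut depth > 0.5 in
fragility = np.mean(rut > threshold)
print(f"P(rut > {threshold} in) ~= {fragility:.4f}")
```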
Item: An Integrative Approach to Reliability Analysis of an IEC 61850 Digital Substation (2012-11-28)
Zhang, Yan, 1988-
In recent years, reliability evaluation of substation automation systems has received significant attention from the research community. With the advent of the smart grid, there is a growing trend to integrate more computation and communication technology into power systems. This thesis focuses on the reliability evaluation of modern substation automation systems. Such systems include both physical (current-carrying) devices, such as lines, circuit breakers, and transformers, and cyber devices (Ethernet switches, intelligent electronic devices, and cables), and belong to a broader class of cyber-physical systems. We assume that the substation utilizes the IEC 61850 standard, the dominant standard for substation automation. Focusing on IEC 61850, we discuss the failure modes and analyze their effects on the system. We utilize reliability block diagrams to analyze the reliability of substation components (bay units) and then use the state-space approach to study the effects at the substation level. The case study is based on an actual IEC 61850 substation automation system, and conclusions are drawn for different network topologies. Our analysis provides a starting point for evaluating the reliability of the substation and the effects of substation failures on the rest of the power system. Using the state-space methods, the steady-state probability of each failure effect was calculated for the different bay units. These probabilities can be further used in modeling the composite power system to analyze loss-of-load probabilities.
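The two building blocks named in this abstract, reliability block diagrams and a state-space (Markov) model, can be illustrated in a few lines. The component availabilities and failure/repair rates below are hypothetical; the steady-state probabilities come from solving pi Q = 0 with the normalization sum(pi) = 1.

```python
import numpy as np

# Series reliability block diagram for one bay unit: every component must work.
avail = {"IED": 0.9995, "ethernet_switch": 0.9990, "merging_unit": 0.9992}
bay_availability = np.prod(list(avail.values()))
print("bay-unit availability:", bay_availability)

# Two-state Markov model (up/down) for the bay unit: solve pi Q = 0, sum(pi) = 1.
lam, mu = 2.0, 365.0          # hypothetical failure and repair rates (per year)
Q = np.array([[-lam, lam],
              [mu, -mu]])
A = np.vstack([Q.T, np.ones(2)])      # stack normalization onto pi Q = 0
b = np.array([0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
print("steady-state [up, down]:", pi)
```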
Item: Analyses of power system vulnerability and total transfer capability (Texas A&M University, 2006-04-12)
Yu, Xingbin
Modern power systems are now stepping into the post-restructuring era, in which utility industries as well as ISOs (Independent System Operators) are involved. Attention needs to be paid to the reliability study of power systems by both the utility companies and the ISOs. An uninterrupted, high-quality power supply is required for the sustainable development of a technological society. Power system blackouts generally result from cascading outages. Protection system hidden failures remain dormant when everything is normal and are exposed as a result of other system disturbances. This dissertation provides new methods for power system vulnerability analysis including protection failures. Both adequacy and security aspects are included. The power system vulnerability analysis covers the following issues: 1) protection system failure analysis and modeling based on protection failure features; 2) a new methodology for reliability evaluation that incorporates protection system failure modes; and 3) the application and evaluation of variance reduction techniques. A new model of a current-carrying component paired with its associated protection system has been proposed. The model differentiates two protection failure modes, and it is the foundation of the proposed research. Detailed stochastic features of system contingencies and corresponding responses are considered. Both adequacy and security reliability indices are computed. Moreover, a new reliability index, ISV (Integrated System Vulnerability), is introduced to represent the integrated reliability performance with consideration of protection system failures. According to these indices, we can locate the weakest point or link in a power system. The whole analysis procedure is based on a non-sequential Monte Carlo simulation method. In reliability analysis, especially with Monte Carlo simulation, computation time is a function not only of the large number of simulations but also of time-consuming system state evaluations, such as OPF (Optimal Power Flow) and stability assessment. Theoretical and practical analysis is conducted for the application of variance reduction techniques. The dissertation also proposes a comprehensive approach for a TTC (Total Transfer Capability) calculation that considers thermal, voltage, and transient stability limits. Both steady-state and dynamic security assessments are included in the process of obtaining total transfer capability. In particular, the effect of FACTS (Flexible AC Transmission Systems) devices on TTC is examined. FACTS devices have been shown to have both positive and negative effects on system stability depending on their location. Furthermore, this dissertation proposes a probabilistic method which gives a new framework for analyzing total transfer capability under actual operational conditions.
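Non-sequential Monte Carlo simulation, the engine behind the proposed vulnerability indices, samples component states independently in each trial and evaluates the resulting system state. The sketch below uses a bare generation-adequacy check (a loss-of-load probability estimate) in place of the OPF-based state evaluation the dissertation performs, with hypothetical unit data.

```python
import numpy as np

rng = np.random.default_rng(2)
n_trials = 200_000

# Hypothetical generating units: (capacity in MW, forced outage rate).
units = np.array([(200, 0.05), (150, 0.04), (150, 0.04), (100, 0.08)])
load = 420.0  # MW

caps, fors = units[:, 0], units[:, 1]
up = rng.random((n_trials, len(units))) > fors   # sampled component states
available = (up * caps).sum(axis=1)

lolp = np.mean(available < load)   # loss-of-load probability estimate
print(f"LOLP ~= {lolp:.5f}")
```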
Item: Analysis of performance and reliability of offshore pile foundation systems based on hurricane loading (2011-05)
Chen, Jiun-Yih; Gilbert, Robert B. (Robert Bruce), 1965-; Stokoe, II, Kenneth H.; Manuel, Lance; Bickel, J. Eric; Murff, James D.
Jacket platforms are fixed-base offshore structures used to produce oil and gas in relatively shallow waters worldwide. Their pile foundation systems seemed to perform better than what they were designed for during severe hurricanes. This observation has led to a common belief in the offshore oil and gas industry that foundation design is overly conservative. The objective of this research is to provide information to help improve the state of practice in designing and assessing jacket pile foundations to achieve a consistent level of performance and reliability. A platform database consisting of 31 structures was compiled, and 13 foundation systems were analyzed using a simplified foundation collapse model, supplemented by a 3-D structural model. The predicted performance for most of the 13 platform foundations is consistent with their observed performance. These cases do not preclude potential conservatism in foundation design, because only a small number of platform foundations were analyzed and only one of them actually failed. The potential failure mechanism of a foundation system is an important consideration for its performance in the post-hurricane assessment. Structural factors can be more important than geotechnical factors for foundation system capacity. Prominent structural factors include the presence of well conductors and jacket leg stubs, the yield stress of piles and conductors, the axial flexibility of piles, the rigidity and strength of jackets, and the robustness of foundation systems. These factors affect foundation system capacity in a synergistic manner. Sand layers play an important role in the performance of the three platform foundations exhibiting the largest discrepancy between predicted and observed performance; site-specific soil borings are not available in these cases. Higher spatial variability in pile capacity can be expected in alluvial or fluviatile geology with interbedded sands and clays. The uncertainties in base shear and overturning moment in the load are approximately the same, and they are slightly higher than the uncertainty in the overturning capacity of a 3-pile foundation system. The uncertainty in the overturning capacity of this foundation system is higher than the uncertainty in its shear capacity. These uncertainties affect the reliability of this foundation system.

Item: Characterization of voltage noise in big, small and single-ISA heterogeneous systems (2013-05)
Garg, Ankita; John, Lizy Kurian; Reddi, Vijay Janapa
Sensitivity of the microprocessor to voltage fluctuations is becoming a major concern with the growing emphasis on designing power-efficient microprocessors. Voltage fluctuations that exceed a certain threshold cause "emergencies" that can lead to timing errors in the processor, thus risking reliability. To guarantee correctness under such conditions, large voltage guardbands are employed, at the cost of reduced performance and wasted power. Trends in microprocessor technology indicate that worst-case operating voltage margins are not sustainable. Since voltage emergencies occur only infrequently, resilient architectures with aggressive guardbands are needed. However, to enable exploration of the design space of resilient processors, it is important to have a deep understanding of the characteristics of voltage noise in different system configurations. Prior research in this area has mostly focused on systems with very few cores. Given the increasing relevance of large multi-core systems, this thesis presents a detailed characterization of voltage noise on chip multiprocessors consisting of a large number of cores. The data indicate that while the worst-case voltage droop increases with the number of cores, the frequency of occurrence of the droops is not greatly impacted, emphasizing the feasibility of employing resilient microarchitectures with aggressive voltage margins. The thesis also presents a comparative study of voltage noise in CMPs consisting of either high-performance out-of-order cores or power-efficient in-order cores. The study highlights that the out-of-order cores experience much larger voltage variations than the in-order cores, but offer a clear advantage in terms of performance. Experiments indicate that in-order configurations that offer performance equivalent to the out-of-order cores result in a large energy-delay product, indicating the trade-offs involved in designing for performance, power, and reliability. The thesis also presents a study of voltage noise in single-ISA heterogeneous configurations, to highlight the benefits of such systems in lowering the worst-case voltage margins, which improves both performance and power. The experimental results indicate that the worst-case voltage droop in such heterogeneous systems lies in between that of the out-of-order and in-order cores, while providing reasonable power efficiency and performance. Further, the work highlights the importance of exploring the design space of heterogeneous systems with reliability as an important design criterion.
Item: Design modification for the modular helium reactor for higher temperature operation and reliability studies for nuclear hydrogen production processes (2009-05-15)
Reza, S.M. Mohsin
Design options have been evaluated for the Modular Helium Reactor (MHR) for higher temperature operation. An alternative configuration for the MHR coolant inlet flow path is developed to reduce the peak vessel temperature (PVT). The coolant inlet path is shifted from the annular path between the reactor core barrel and vessel wall to a path through the permanent side reflector (PSR). The number and dimensions of coolant holes are varied to optimize the pressure drop, the inlet velocity, and the percentage of graphite removed from the PSR to create this inlet path. With the removal of ~10% of the graphite from the PSR, the PVT is reduced from 541 °C to 421 °C. A new design for the graphite block core has been evaluated and optimized to reduce the inlet coolant temperature with the aim of further reducing the PVT. The dimensions and number of fuel rods and coolant holes, and the triangular pitch, have been changed and optimized. Different packing fractions for the new core design have been used to conserve the number of fuel particles. Thermal properties for the fuel elements are calculated and incorporated into these analyses. The inlet temperature, mass flow, and bypass flow are optimized to limit the peak fuel temperature (PFT) within an acceptable range. Using both of these modifications together, the PVT is reduced to ~350 °C while keeping the outlet temperature at 950 °C and maintaining the PFT within acceptable limits. The vessel and fuel temperatures during low-pressure and high-pressure conduction cooldown transients are found to be well below the design limits. Reliability and availability studies for coupled nuclear hydrogen production processes, based on the sulfur-iodine thermochemical process and the high temperature electrolysis process, have been accomplished. Fault tree models for both of these processes are developed. Using information obtained on system configuration, component failure probability, component repair time, and system operating modes and conditions, the system reliability and availability are assessed. Required redundancies are introduced to improve system reliability and to optimize the plant design for economic performance. The failure rates and outage factors of both processes are found to be well below the maximum acceptable range.
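A fault tree combines basic-event probabilities through AND/OR gates to obtain the top-event probability. The minimal evaluator below assumes independent basic events; the tree structure and probabilities are invented placeholders, not values from the thesis's sulfur-iodine or electrolysis models.

```python
# Minimal fault-tree evaluation with independent basic events
# (probabilities are hypothetical placeholders).
def OR(*p):   # at least one input event occurs
    q = 1.0
    for x in p:
        q *= (1.0 - x)
    return 1.0 - q

def AND(*p):  # all input events occur
    q = 1.0
    for x in p:
        q *= x
    return q

pump_a, pump_b = 0.02, 0.02    # redundant acid pumps
heat_exchanger = 0.01
control_system = 0.005

# Top event: hydrogen production unavailable.
top = OR(AND(pump_a, pump_b), heat_exchanger, control_system)
print(f"P(top event) = {top:.5f}")
```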
Item: Design, Simulation, and Analysis of Substation Automation Networks (2012-07-16)
Kembanur Natarajan, Elangovan
Society depends on computer networks for communication. These networks were built to support and facilitate several important applications such as email, web browsing, and instant messaging. Recently, there is significant interest in leveraging modern local and wide area communication networks to improve reliability and performance in critical infrastructures. Emerging critical infrastructure applications, such as the smart grid, require a certain degree of reliability and Quality of Service (QoS). Supporting these applications requires network protocols that enable delay-sensitive packet delivery and packet prioritization. However, most traditional networks are designed to provide best-effort service without any support for QoS. The protocols used in these networks do not support packet prioritization, delay requirements, or reliability. In this thesis, we focus on the design and analysis of communication protocols for supporting smart grid applications. In particular, we focus on Substation Automation Systems (SAS). Substations are nodes in the smart grid infrastructure that help in the transportation of power by connecting the transmission and distribution lines. SAS applications are configured to operate with minimal human intervention. The SAS monitors the line loads continuously. If the load values are too high and can lead to damage, the SAS declares those conditions as faults. On fault detection, the SAS must handle the communication with the relay to open the circuit and prevent any damage. These messages are of high priority and require reliable, delay-sensitive delivery. There is a threshold for the delay of these messages, and a slight increase in the delay above the threshold might cause severe damage. Along with such high-priority messages, the SAS carries a lot of background traffic as well. In spite of the background traffic, the substation network must deliver the priority messages on time. Hence, the network plays a vital role in the operation of the substation. Networks designed for such applications should be analyzed carefully to make sure that the requirements are met properly. We analyzed and compared the performance of the SAS under different network topologies. By observing the characteristics of the existing architectures, we devised new architectures that perform better. We have also suggested several modifications that allow significant improvement in the performance of the existing solutions.
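The delay requirement described above can be checked with a toy simulation of a single output link serving trip messages at strict priority over background traffic. The link rate, frame sizes, arrival rates, and 4 ms budget are all hypothetical; a real SAS study would model the full topology and the IEC 61850 message classes.

```python
import heapq, random

random.seed(3)
SIM_TIME = 10.0          # seconds of simulated traffic
LINK_RATE = 100e6        # hypothetical 100 Mb/s station bus
DELAY_BUDGET = 0.004     # hypothetical 4 ms budget for trip messages

def poisson_arrivals(rate_hz, frame_bits):
    """(arrival time, service time) pairs for a Poisson frame stream."""
    t, jobs = 0.0, []
    while True:
        t += random.expovariate(rate_hz)
        if t >= SIM_TIME:
            return jobs
        jobs.append((t, frame_bits / LINK_RATE))

# Priority 0 = protection (trip) messages, priority 1 = background traffic.
jobs = sorted([(t, s, 0) for t, s in poisson_arrivals(10, 1500 * 8)] +
              [(t, s, 1) for t, s in poisson_arrivals(900, 12000 * 8)])

# Non-preemptive strict-priority service on one output link.
clock, i, waiting, trip_delays = 0.0, 0, [], []
while i < len(jobs) or waiting:
    if not waiting:
        clock = max(clock, jobs[i][0])       # idle until the next arrival
    while i < len(jobs) and jobs[i][0] <= clock:
        arr, svc, prio = jobs[i]
        heapq.heappush(waiting, (prio, arr, svc))
        i += 1
    prio, arr, svc = heapq.heappop(waiting)  # highest priority, earliest arrival
    clock = max(clock, arr) + svc            # transmit the chosen frame
    if prio == 0:
        trip_delays.append(clock - arr)

worst = max(trip_delays)
print(f"worst trip delay {worst * 1e3:.2f} ms (budget {DELAY_BUDGET * 1e3:.0f} ms)")
```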
Item: Designing shipboard electrical distribution systems for optimal reliability (2013-12)
Stevens, McKay Benjamin; Santoso, Surya
Analysis was performed to quantify and compare the reliability of several different notional shipboard DC distribution system topologies in serving their equipment loads. Further, the relationship between the relative placement of loads and generators within a distribution system and the system's reliability was investigated, resulting in an algorithmically derived optimal placement configuration in the system topology found to be the most reliable in the initial analysis. Using Markov models and fault-tree analysis, system reliability indices were derived from distribution system component reliability indices, and these values were compared between competing topologies and equipment configurations. A distribution system based on the breaker-and-a-half topology often used in terrestrial utility substations was found to be superior in terms of reliability to the currently standard ring bus topology. Expected rates of service interruptions to equipment systems served by the breaker-and-a-half system were reduced overall, in some cases dropping dramatically to less than one expected interruption per 10,000 years. This improvement, however, came at the expense of requiring more circuit breakers in the distribution system's construction. Within this breaker-and-a-half distribution system, an optimal placement of loads and generators was algorithmically derived, which further improved the reliability of the system. This improvement over the base case was marginal, but the optimized placement configuration was able to reduce the expected interruption rate of the ship's radar system by over 40%.
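The interruption-rate comparisons in this abstract rest on frequency-and-duration arithmetic for repairable components. A minimal sketch for one supply path and for two fully redundant paths, with hypothetical failure and repair rates rather than the thesis's component data:

```python
# Frequency-and-duration arithmetic for repairable supply paths.
lam = 0.5           # failures per year for one supply path (hypothetical)
mu = 8760.0 / 24.0  # repairs per year (24-hour mean time to repair)

A = mu / (lam + mu)      # steady-state availability
U = 1.0 - A              # unavailability
f_single = lam * A       # expected interruptions per year, one path

# Two fully redundant paths: the load is lost only when both are down;
# the both-down state is entered from a one-down state at rate lam.
f_parallel = 2.0 * A * U * lam

print(f"one path  : {f_single:.4f} interruptions/yr")
print(f"two paths : {f_parallel:.6f} interruptions/yr")
```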
Item: Deterministic and probabilistic analyses of offshore pile systems (2016-08)
Chen, Jinbo, Ph.D.; Gilbert, Robert B. (Robert Bruce), 1965-; Manuel, Lance; Kallivokas, Loukas F.; Cox, Brady; Murff, James Don
The offshore pile system capacity and the pile capacity model biases are important aspects in the assessment of existing offshore platforms and in the performance reliability that is achieved using the state of practice. The objectives of this research are to improve understanding of pile system behavior, to calibrate the pile system capacity model bias factors, and to evaluate the reliabilities of offshore pile systems. A simplified single-pile failure surface in terms of three-dimensional pile head loads is proposed based on analytical lower- and upper-bound solutions, and is verified through finite element analyses. Numerical lower- and upper-bound models are then proposed for the ultimate capacity of a pile system, and are shown to be efficient and effective in considering global torsion and out-of-plane failures. The evidence from the survival of offshore platforms indicates that (1) well conductors should be included in assessing the pile system ultimate capacity; (2) static p-y curves should be used, which increases the pile system lateral capacity by 10 to 20%; (3) the mean value of the steel yield strength should be used; (4) jacket leg stubs should be included; and (5) site-specific geotechnical information is important. The model bias factors in the API load and resistance design recipe are calibrated through Bayes' Theorem based on the predicted and observed performance of eighteen offshore platforms in recent Gulf of Mexico hurricanes. The API load and resistance design recipe is calibrated to be close to unbiased for predicting the jacket system performance; slightly conservative for predicting a foundation overturning failure in clay; and conservative for predicting a lateral failure in clay and a foundation overturning failure in sand. The reliability of a pile system is shown to be insensitive to water depths and locations in the Gulf of Mexico, but depends on the pile layout, number of piles, loading direction, and expected failure mode. The pile system redundancy (a measure of capacity beyond failure of the first element) and robustness (a measure of capacity when the system is damaged) depend on the failure mode, pile geometry and layout, and loading directions. In general, the 8-leg pile system is more redundant and more robust than the 3-leg and 4-leg pile systems. The complexity (a measure of how well the most critically-loaded element represents all elements) depends on the pile layout, the expected failure mode of a single pile, and the pile capacity uncertainty. The complexity is generally small, indicating that the failure probability of the most critically-loaded pile is representative of the failure probabilities for all piles.
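The Bayes'-Theorem calibration of a capacity model bias factor can be sketched with a simple grid posterior: each platform contributes the probability of its observed outcome (survival or failure) given a candidate bias. The load ratios, outcomes, and lognormal capacity uncertainty below are hypothetical, not the eighteen-platform dataset.

```python
import numpy as np
from scipy.stats import norm

# Hypothetical hindcast (load / predicted capacity) ratios and outcomes for a
# handful of platforms (True = survived).
ratio = np.array([0.6, 0.8, 0.9, 1.1, 1.2, 1.4])
survived = np.array([True, True, True, True, False, False])

sigma = 0.3                       # assumed lognormal capacity uncertainty
b = np.linspace(0.5, 2.5, 401)    # candidate median bias factors
db = b[1] - b[0]

# Survival probability: P(b * eps > ratio) with eps ~ lognormal(0, sigma).
p_surv = 1.0 - norm.cdf(np.log(ratio[None, :] / b[:, None]) / sigma)
likelihood = np.prod(np.where(survived, p_surv, 1.0 - p_surv), axis=1)

posterior = likelihood / (likelihood.sum() * db)   # uniform prior on the grid
print("posterior mean bias factor:", (b * posterior).sum() * db)
```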
Item: Development of reliable pavement models (2011-08)
Aguiar Moya, José Pablo, 1981-; Prozzi, Jorge Alberto; Manuel, Lance; Walton, Michael; Machemehl, Randy B.; Yilmaz, Hilal
As the cost of designing and building new highway pavements increases and the number of new construction and major rehabilitation projects decreases, ensuring that a given pavement design performs as expected in the field becomes vital. To address this issue in other fields of civil engineering, reliability analysis has been used extensively. However, in the case of pavement structural design, the reliability component is usually neglected or overly simplified. To address this need, the current dissertation proposes a framework for estimating the reliability of a given pavement structure regardless of the pavement design or analysis procedure being used. As part of the dissertation, the framework is applied with the Mechanistic-Empirical Pavement Design Guide (MEPDG), and failure is considered as a function of rutting of the hot-mix asphalt (HMA) layer. The proposed methodology consists of fitting a response surface, in place of the time-demanding implicit limit state functions used within the MEPDG, in combination with an analytical approach to estimating reliability using second moment techniques, First-Order and Second-Order Reliability Methods (FORM and SORM), and simulation techniques, Monte Carlo and Latin Hypercube Simulation. To demonstrate the methodology, a three-layered pavement structure is selected, consisting of a hot-mix asphalt (HMA) surface, a base layer, and subgrade. Several pavement design variables are treated as random; these include the HMA and base layer thicknesses, the base and subgrade moduli, and the HMA layer binder and air void content. Information on the variability of and correlation between these variables is obtained from the Long-Term Pavement Performance (LTPP) program, and likely distributions, coefficients of variation, and correlations between the variables are estimated. Additionally, several scenarios are defined to account for climatic differences (cool, warm, and hot climatic regions), truck traffic distributions (mostly single-unit trucks versus mostly single-trailer trucks), and the thickness of the HMA layer (thick versus thin). First- and second-order polynomial HMA rutting failure response surfaces with interaction terms are fit by running the MEPDG under a full factorial experimental design consisting of 3 levels of the aforementioned design variables. These response surfaces are then used to analyze the reliability of the given pavement structures under the different scenarios. Additionally, to check the accuracy of the proposed framework, direct simulation using the MEPDG was performed for the different scenarios. Very small differences were found between the estimates based on the response surfaces and direct simulation using the MEPDG, confirming the accuracy of the proposed procedure. Finally, a sensitivity analysis on the number of MEPDG runs required to fit the response surfaces was performed, and it was identified that reducing the experimental design by one level still results in response surfaces that properly fit the MEPDG, ensuring the applicability of the method for practical applications.
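FORM searches for the most probable failure point on a limit-state surface, and with a fitted response surface standing in for the MEPDG each evaluation is cheap. The sketch below runs the standard HL-RF iteration on a hypothetical quadratic rut-depth surface, treating the two variables as independent standard normals (the dissertation models correlated physical variables).

```python
import numpy as np
from scipy.stats import norm

# Hypothetical quadratic response surface for HMA rut depth (inches) in two
# standardized variables: thickness t and subgrade modulus m (not MEPDG output).
def rut_rs(t, m):
    return 0.45 - 0.08 * t - 0.05 * m + 0.015 * t * m + 0.01 * t**2

# Limit state: g(u) <= 0 means failure (rut depth over 0.5 in).
def g(u):
    return 0.5 - rut_rs(u[0], u[1])

def grad(fun, u, h=1e-6):  # central-difference gradient
    return np.array([(fun(u + h * e) - fun(u - h * e)) / (2 * h)
                     for e in np.eye(len(u))])

u = np.zeros(2)
for _ in range(100):               # HL-RF iteration for the design point
    gu, dg = g(u), grad(g, u)
    u_next = (dg @ u - gu) / (dg @ dg) * dg
    if np.linalg.norm(u_next - u) < 1e-9:
        u = u_next
        break
    u = u_next

beta = np.linalg.norm(u)           # reliability index
print(f"beta = {beta:.3f}, Pf ~ {norm.cdf(-beta):.3e}")
```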
Item: Dynamic resource allocation for energy management in data centers (2009-05-15)
Rincon Mateus, Cesar Augusto
In this dissertation we study the problem of allocating computational resources and managing applications in a data center to serve incoming requests in such a way that energy usage, reliability, and quality of service considerations are balanced. The problem is motivated by the growing energy consumption of data centers in the world and their overall inefficiency. This work is focused on designing flexible and robust strategies to manage resources in such a way that the system is able to meet its service agreements even when the load conditions change. As a first step, we study the control of a Markovian queueing system with a controllable number of servers and controllable service rates (M/M_t/k_t) to minimize effort and holding costs. We present structural properties of the optimal policy and suggest an algorithm to find good-performance policies even for large cases. Then we present a reactive/proactive approach, and a tailor-made wavelet-based forecasting procedure, to determine the resource allocation in a single-application setting; the method is tested by simulation with real web traces. The main feature of this method is its robustness and flexibility in meeting QoS goals even when the traffic behavior changes. The system was tested by simulating a system with a time-service-factor QoS agreement. Finally, we consider the multi-application setting and develop a novel load consolidation strategy (combining applications that are traditionally hosted on different servers) to reduce the server-load variability and the number of booting cycles in order to obtain a better capacity allocation.
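For a fixed number of servers, the M/M/k waiting-time tail gives a quick check of a time-service-factor agreement; the controlled M/M_t/k_t model in the dissertation additionally varies the number of servers and the service rates over time. The arrival rate, service rate, and 50 ms / 95% target below are hypothetical.

```python
import math

def erlang_c(k, a):
    """P(wait) for an M/M/k queue with offered load a = lam/mu (needs a < k)."""
    inv = sum(a**n / math.factorial(n) for n in range(k))
    top = a**k / math.factorial(k) * k / (k - a)
    return top / (inv + top)

def p_wait_exceeds(k, lam, mu, t):
    """M/M/k: P(waiting time > t) = C(k, a) * exp(-(k*mu - lam) * t)."""
    return erlang_c(k, lam / mu) * math.exp(-(k * mu - lam) * t)

lam, mu, target_t = 80.0, 10.0, 0.05   # req/s, service rate per server, 50 ms
k = math.ceil(lam / mu) + 1            # smallest stable server count
while p_wait_exceeds(k, lam, mu, target_t) > 0.05:  # 95% served within 50 ms
    k += 1
print(f"servers needed: {k}")
```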
Item: Dynamic response and reliability analysis of an offshore wind turbine supported by a semi-submersible platform (2015-12)
Thomas, Edwin, M.S. in Engineering; Manuel, Lance
Wind energy is the fastest growing renewable energy source in the world. The trend is expected to continue with falling technology costs, energy security concerns, and the need to address environmental issues. Offshore wind turbines have a few important advantages over land-based turbines: offshore sites experience stronger and less turbulent winds, there are fewer negative aesthetic impacts in an offshore location, and there is greater ease in transporting wind turbine components over sea than on land. Large offshore wind turbines mounted atop floating platforms offer a viable solution for deepwater sites. Of the various floating platform concepts being considered, a moored semi-submersible platform is considered in this study. The dynamic response and reliability analysis of a 13.2 MW offshore wind turbine supported by a moored semi-submersible platform is the subject of this study. A model for this integrated system has been developed, and its various physical, geometric, and dynamic properties have been studied in this and an associated study. Loads data for the extreme and fatigue analysis of such systems are generally obtained by running time-domain simulations for a range of sea states that are representative of the expected site-specific metocean conditions. The selected site of interest in the North Sea has a water depth of 200 m. The Environmental Contour (EC) method is used to identify sea states of interest that are associated with a target return period (50 years). These sea states are considered in short-term (1-hour) simulations of the integrated turbine-platform-mooring system. The dynamic behavior of the integrated wind turbine system is studied. Critical sea states for the various response loads are identified, and the sensitivity of the system to the metocean conditions is discussed. Estimation of 50-year response levels (for turbine loads, platform motions, and the mooring line tension at the fairlead) associated with the target probability is subsequently carried out using 2D and 3D Inverse First-Order Reliability Method (FORM) approaches.
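The Environmental Contour method picks sea states on a sphere of radius beta = Phi^{-1}(1 - 1/N) in standard normal space, where N is the number of independent 1-hour sea states in the 50-year return period, and maps them to physical variables. The Weibull marginal for Hs and the conditional lognormal for Tp below are invented for illustration, not the site's fitted joint model.

```python
import numpy as np
from scipy.stats import norm, weibull_min, lognorm

# Inverse-FORM contour radius for a 50-year return period with 1-h sea states.
n_states = 50 * 365.25 * 24
beta = norm.ppf(1.0 - 1.0 / n_states)

theta = np.linspace(0, 2 * np.pi, 12, endpoint=False)
u1, u2 = beta * np.cos(theta), beta * np.sin(theta)

# Hypothetical joint metocean model: Hs ~ Weibull, Tp | Hs ~ lognormal.
hs = weibull_min.ppf(norm.cdf(u1), c=1.5, scale=3.0)
mu_tp = 0.7 * np.log(hs + 5.0) + 1.0            # assumed conditional median
tp = lognorm.ppf(norm.cdf(u2), s=0.15, scale=np.exp(mu_tp))

for h, t in zip(hs, tp):
    print(f"Hs = {h:5.2f} m, Tp = {t:5.2f} s")
```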
Item: Empirical Measurements of Travelers' Value of Travel Time Reliability (2014-08-12)
Danda, Santosh Rao
Travel time and travel time reliability are two fundamental factors influencing travel behavior and demand. The concept of the value of time (VOT) has been extensively studied, and estimates of VOT have been obtained from surveys and empirical data. On the other hand, although the importance of the value of reliability (VOR) is appreciated, research related to VOR is still in its early stages. VORs have been estimated using surveys but almost never using empirical data. This research used empirical data to take an initial step toward understanding the importance of travel time reliability. Katy Freeway travelers face a daily choice between reliable tolled lanes and less reliable but untolled lanes. An extensive dataset of Katy Freeway travel was used to examine the influence of time, reliability, and toll on lane-choice behavior. Because it is unclear how travelers perceive travel time reliability, different measures of reliability had to be tested to see which best represents travelers' perception. In this research, three measures of reliability were used: the standard deviation of travel time, the coefficient of variation of travel time, and the travel time standard deviation relative to the total trip time. Lane choice was estimated using multinomial logit models. Basic models, including only travel time and toll, yielded reasonable results. The models included VOTs of $1.53/hour, $6.05/hour, and $9.05/hour for off-peak, shoulder, and peak-period travelers, respectively. However, adding the different measures of reliability, such as the standard deviation and coefficient of variation, to the models resulted in counterintuitive results. Positive coefficients for unreliability of travel time were obtained, indicating that travelers, at least on the Katy Freeway, do not value travel time reliability as has been theorized in earlier studies. It was concluded that additional research on how travelers perceive the reliability and time savings of managed lanes (MLs) is needed, because modeling real-world choices of MLs using empirical data and the standard definitions of reliability and time savings did not concur with the existing theory on travel time reliability and led to counterintuitive results.
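In a multinomial logit model with linear-in-parameters utility, the value of time is the ratio of the time and toll coefficients, and a value of reliability follows the same way from the reliability coefficient. The coefficients below are illustrative, not the Katy Freeway estimates:

```python
# Implied valuations from multinomial-logit utility coefficients.
# V = b_time * minutes + b_toll * dollars + b_rel * std_dev_minutes
b_time = -0.045   # per minute of travel time (illustrative)
b_toll = -0.450   # per dollar of toll (illustrative)
b_rel = -0.020    # per minute of travel-time standard deviation (illustrative)

vot = b_time / b_toll * 60   # $/hour of travel time saved
vor = b_rel / b_toll * 60    # $/hour of reliability (std dev) improvement
print(f"VOT = ${vot:.2f}/hr, VOR = ${vor:.2f}/hr, "
      f"reliability ratio = {vor / vot:.2f}")
```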
Item: Energy and Reliability in Future NOC Interconnected CMPS (2013-08-01)
Kim, Hyungjun
In this dissertation, I explore energy and reliability in future NoC (Network-on-Chip) interconnected CMPs (chip multiprocessors), as these concerns have become a first-order constraint in future CMP design. In the first part, we target the root cause of network energy consumption through techniques that reduce link- and router-level switching activity. We specifically focus on memory subsystem traffic, as it comprises the bulk of NoC load in a CMP. By transmitting only the flits that contain words predicted to be useful, using a novel spatial locality predictor, our scheme seeks to reduce network activity. We aim to further lower NoC energy consumption through microarchitectural mechanisms that inhibit datapath switching activity caused by unused words in individual flits. Using simulation-based performance studies and detailed energy models based on synthesized router designs and different link wire types, we show that (a) the prediction mechanism achieves very high accuracy, with an average rate of false-unused prediction of just 2.5%; (b) the combined NoC energy savings enabled by the predictor and microarchitectural support are 36% on average and up to 57% in the best case; and (c) there is no system performance penalty as a result of this technique. In the second part, we present a method for dynamic voltage/frequency scaling of networks-on-chip and last-level caches in CMP designs, where the shared resources form a single voltage/frequency domain. We develop a new technique for monitoring and control and validate it by running PARSEC benchmarks through full-system simulations. These techniques reduce the energy-delay product by 46% compared to a state-of-the-art prior work. In the third part, we develop critical path models for HCI- and NBTI-induced wear, assuming stress caused under realistic workload conditions, and apply them to the interconnect microarchitecture. A key finding from this modeling is that, counter to prevailing wisdom, wearout in the CMP on-chip interconnect is correlated with a lack of load observed in the NoC routers, rather than high load. We then develop a novel wearout-decelerating scheme in which routers under low load have their wearout-sensitive components exercised without significantly impacting the router's cycle time, pipeline depth, area, or power consumption. We subsequently show that the proposed design yields a 13.8x to 65x increase in CMP lifetime.

Item: Evaluation of power system security and development of transmission pricing method (Texas A&M University, 2004-11-15)
Kim, Hyungchul
The electric power utility industry is presently undergoing a change towards a deregulated environment. This has resulted in the unbundling of generation, transmission, and distribution services. The introduction of competition into unbundled electricity services may lead system operation closer to its security boundaries, resulting in smaller operating safety margins. The competitive environment is expected to lead to lower price rates for customers and higher efficiency for power suppliers in the long run. Under this deregulated environment, security assessment and pricing of transmission services have become important issues in power systems. This dissertation provides new methods for power system security assessment and transmission pricing. In power system security assessment, the following issues are discussed: 1) the description of probabilistic methods for power system security assessment; 2) the computation time of simulation methods; and 3) on-line security assessment for operation. A probabilistic method using Monte Carlo simulation is proposed for power system security assessment. This method takes into account dynamic and static effects corresponding to contingencies. Two different Kohonen networks, Self-Organizing Maps and Learning Vector Quantization, are employed to speed up the probabilistic method. The combination of Kohonen networks and Monte Carlo simulation can reduce computation time in comparison with straight Monte Carlo simulation. A technique for security assessment employing a Bayes classifier is also proposed. This method can be useful for system operators making security decisions during on-line power system operation. This dissertation also suggests an approach for allocating transmission transaction costs based on reliability benefits in transmission services. The proposed method shows the transmission transaction cost of reliability benefits when transmission line capacities are considered. The ratio between allocation by transmission line capacity-use and allocation by reliability benefits is computed using the probability of system failure.
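A Bayes classifier for on-line security assessment labels an operating state secure or insecure from measured features. The sketch below trains a Gaussian naive Bayes classifier on synthetic data; the features, the threshold generating the labels, and the feature-independence assumption are all illustrative simplifications, not the dissertation's classifier.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic training data: rows = operating states, columns = features
# (e.g., per-unit system load and largest line loading); label 1 = insecure.
n = 1000
load = rng.normal(0.7, 0.15, n)
flow = load * rng.normal(0.9, 0.1, n)
X = np.column_stack([load, flow])
y = (flow + rng.normal(0, 0.05, n) > 0.75).astype(int)

# Gaussian naive Bayes: per-class feature means, variances, and priors.
stats = {}
for c in (0, 1):
    Xc = X[y == c]
    stats[c] = (Xc.mean(axis=0), Xc.var(axis=0) + 1e-9, len(Xc) / n)

def log_posterior(x, c):
    m, v, prior = stats[c]
    return np.log(prior) - 0.5 * np.sum(np.log(2 * np.pi * v) + (x - m) ** 2 / v)

x_new = np.array([0.85, 0.80])  # a heavily loaded operating state
label = max((0, 1), key=lambda c: log_posterior(x_new, c))
print("classified as:", "insecure" if label else "secure")
```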
Item: Exploring scaling limits and computational paradigms for next generation embedded systems (2009-12)
Zykov, Andrey V.; De Veciana, Gustavo
It is widely recognized that device and interconnect fabrics at the nanoscale will be characterized by a higher density of permanent defects and increased susceptibility to transient faults. This appears to be intrinsic to nanoscale regimes and fundamentally limits the eventual benefits of the increased device density, i.e., the overheads associated with achieving fault-tolerance may counter the benefits of increased device density -- a density-reliability tradeoff. At the same time, as devices scale down, one can expect a higher proportion of area to be associated with interconnection, i.e., area is wire-dominated. In this work we theoretically explore density-reliability tradeoffs in wire-dominated integrated systems. We derive an area scaling model based on simple assumptions capturing the salient features of hierarchical design for high performance systems, along with first-order assumptions on reliability, wire area, and wire length across hierarchical levels. We then evaluate the overheads associated with using basic fault-tolerance techniques at different levels of the design hierarchy. This, albeit simplified, model allows us to tackle several interesting theoretical questions: (1) When does it make sense to use smaller, less reliable devices? (2) At what scale of the design hierarchy should fault tolerance be applied in high performance integrated systems? In the second part of this thesis we explore perturbation-based computational models as a promising choice for implementing next generation ubiquitous information technology on unreliable nanotechnologies. We show the inherent robustness of such computational models to high defect densities and performance uncertainty, which, when combined with low manufacturing precision requirements, makes them particularly suitable for emerging nanoelectronics. We propose a hybrid eNano-CMOS perturbation-based computing platform relying on a new style of configurability that exploits the computational model's unique form of unstructured redundancy. We consider the practicality and scalability of perturbation-based computational models by developing and assessing initial foundations for engineering such systems. Specifically, new design and decomposition principles exploiting task-specific contextual and temporal scales are proposed and shown to substantially reduce complexity for several benchmark tasks. Our results provide strong evidence for the relevance and potential of this class of computational models when targeted at emerging unreliable nanoelectronics.

Item: Flexible and efficient reliability in memory systems (2011-05)
Yoon, Doe Hyun; Erez, Mattan; Patt, Yale N.; Touba, Nur A.; Chiou, Derek; Li, Jian
Future computing platforms will increasingly demand more stringent memory resiliency mechanisms due to shrinking memory cell size, reduced error margins, higher capacity, and higher reliability expectations. Traditional mechanisms, which apply error checking and correcting (ECC) codes uniformly across all memory locations, are inefficient: uniform protection dedicates resources to redundant information and demands a higher cost for stronger protection, a fixed (worst-case based) error tolerance level, and a fixed access granularity. The design of modern computing platforms is a multi-objective optimization, balancing performance, reliability, and many other parameters within a constrained power budget. If resiliency mechanisms consume too many resources, we lose an opportunity to improve performance. Hence, it is important and necessary to enable more efficient and flexible memory resiliency mechanisms. This dissertation develops techniques that enable efficient, adaptive, and dynamically tunable memory resiliency mechanisms. First, we develop two-tiered protection, apply it to the last-level cache, and present Memory Mapped ECC (MME) and ECC FIFO. Two-tiered protection provides low-cost error detection or light-weight correction for the common-case read operations, while the uncommon-case error correction overhead is off-loaded to the main memory namespace. MME and ECC FIFO use different schemes for managing redundant information in main memory. Both achieve a 15-25% reduction in area and a 9-18% reduction in power consumption of the last-level cache, while performance is degraded by only 0.7% on average. Then, we apply two-tiered protection to main memory and augment the virtual memory interface to dynamically adapt error tolerance levels according to user, system, and environmental needs. This mechanism, Virtualized ECC (V-ECC), improves system energy efficiency by 12% and degrades performance by only 1-2% for chipkill-correct level protection. V-ECC also supports ECC in a system with no dedicated storage for redundant information. Lastly, we propose the adaptive granularity memory system (AGMS), which allows different access granularities while supporting ECC. By not wasting off-chip bandwidth on transferring unnecessary data, AGMS achieves higher throughput (by 44%) and power efficiency (by 46%) in a 4-core CMP system. Furthermore, AGMS will provide further gains in future systems, where off-chip bandwidth will be comparatively scarce.
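The appeal of two-tiered protection is easiest to see as storage arithmetic: tier 1 keeps only cheap detection bits next to the data, while correction information is off-loaded to main memory. The bit counts below are illustrative assumptions in the spirit of MME, not the dissertation's exact layouts.

```python
# Storage-overhead comparison for a 64-byte cache line
# (bit counts are illustrative assumptions, not the MME design).
data_bits = 64 * 8

# Uniform protection: SEC-DED per 64-bit word needs 8 check bits.
secded = 8 * (data_bits // 64)

# Two-tiered: tier 1 keeps a light-weight detection code in the cache
# (say an 8-bit checksum per line); tier 2 correction bits live in main
# memory and are touched only when an error is detected.
tier1 = 8
tier2 = secded

print(f"uniform  : {secded / data_bits:6.1%} SRAM overhead")
print(f"two-tier : {tier1 / data_bits:6.1%} SRAM overhead "
      f"(+{tier2 / data_bits:.1%} in DRAM)")
```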
Item: Investigating the construct of ADHD: issues related to factor structure in Korean students (Texas Tech University, 2005-05)
Lee, Jeong Rim; Stevens, Tara; Lan, William; Mulsow, Miriam
The purpose of the present study was to accomplish three tasks. The first task was to examine the reliability and validity of a diagnostic tool for identifying children with Attention Deficit Hyperactivity Disorder (ADHD) in a Korean population, as described in the Diagnostic and Statistical Manual of Mental Disorders-IV-Text Revision (DSM-IV-TR). Evidence of reliable and valid scores, based on DSM-IV-TR diagnostic definitions of ADHD, was necessary to accomplish the other two tasks. The second purpose of the study was to explore whether the current version of the DSM-IV-TR, which consists of two dimensions of inattention and hyperactivity-impulsivity, was appropriate for describing the psychological and behavioral problems of Korean children with ADHD. The third purpose of the study was to examine gender differences in the factor structures of the DSM-IV-TR in Korea, between boys and girls with ADHD. The DSM-IV-TR is the most commonly used manual in the United States to identify students with ADHD. Although DSM-IV-TR criteria have been used in research on ADHD with Korean school-age children, the psychometric characteristics of the ADHD criteria described in the DSM-IV-TR have not been examined. This missing information is imperative for quality research. The DSM-IV-TR used in this study contains 18 ADHD criteria for children's problematic behaviors manifested in inattention and hyperactivity-impulsivity. A questionnaire distributed to 48 elementary school teachers asked them to rate their students' behaviors. The questionnaire was a 5-point scale indicating the degree of severity of the problems the teachers experienced with the students. A total of 1,663 children, 904 males and 759 females, from grades one to six in eight elementary schools located in three cities in South Korea were rated. One way to show evidence of a valid score by the diagnostic definition of ADHD described in the DSM-IV-TR is to show that the measures generated from the DSM-IV-TR are related to results of other tools that measure the same or similar variables. To demonstrate the concurrent validity of the DSM-IV-TR criteria, the author also administered the Attention-Deficit Hyperactivity Disorder Test (ADHDT), another tool measuring ADHD. Another way to show evidence of valid scores of the diagnostic symptoms of ADHD based on the DSM-IV-TR is to show that they were indeed measuring traits related to behavioral and psychological characteristics of ADHD. To demonstrate the construct validity of the DSM-IV-TR criteria, the author drew on evidence from previous studies. Previous studies related to ADHD have documented that individuals with ADHD frequently have comorbid Oppositional Defiant Disorder (ODD) and experience more disciplinary and peer problems. As a result, to support the evidence of construct validity of the ADHD rating scale based on the DSM-IV-TR, the measurement of ODD in the DSM-IV-TR and questions asking about disciplinary problems and peer problems were used. The author completed a preliminary analysis of the reliability of the variables. For the data analysis, scores of reliability and validity of the diagnostic definition of ADHD as described in the DSM-IV-TR were examined using the Pearson correlation coefficient, Cronbach's alpha, and Confirmatory Factor Analysis (CFA). CFA is an appropriate statistical method to answer questions on the appropriateness of the factorial structure of ADHD in the DSM-IV-TR and the gender difference in the configural structure between boys and girls. Scores associated with the diagnostic definition of ADHD as described in the DSM-IV-TR in a Korean population turned out to be internally stable and valid based on teachers' reports. Next, findings from CFA showed that both the two-factor (inattention and hyperactivity/impulsivity) and the three-factor (inattention, hyperactivity, and impulsivity) models of ADHD fit the data well. However, the three-factor model showed slightly higher NFI, TLI, and CFI values and a slightly lower RMSEA value. Last, CFA exploring the differences in factor structure across gender revealed that the three-factor model of ADHD fit the data well for boys in all the sample sizes. However, it fit the data well for girls only in the whole population group, considering the NFI, TLI, and CFI values but not the RMSEA. The three-factor model of ADHD appeared to be the best fit to the data for Korean elementary boys but only satisfied the three incremental indices, the NFI, TLI, and CFI values, in the girls' group. Factor structures of ADHD need to be explained under theoretical assumptions. Barkley's (1997) recently developed hybrid neuropsychological model has been accepted as a unifying way to explain the nature of ADHD. The DSM-IV-TR as a tool to diagnose ADHD was discussed from the perspective of Barkley's hybrid model.
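Cronbach's alpha, one of the reliability statistics used above, is alpha = k/(k-1) * (1 - sum of item variances / variance of the total score). A minimal computation on synthetic rating data (the 9-item structure and score distribution are invented, not the study's data):

```python
import numpy as np

def cronbach_alpha(scores):
    """scores: (n_respondents, k_items) matrix of item ratings."""
    k = scores.shape[1]
    item_var = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

rng = np.random.default_rng(5)
trait = rng.normal(0, 1, 300)                           # latent severity
items = trait[:, None] + rng.normal(0, 0.8, (300, 9))   # 9 correlated items
print(f"alpha = {cronbach_alpha(np.rint(items)):.2f}")
```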