Browsing by Subject "System design"
Now showing 1 - 15 of 15
Item: Analysis of stress singularity of adhered contacts in MEMS (Texas Tech University, 2004-08). Chakkarapani, Venkatasubbarao.
MEMS devices are usually multimaterial systems in which interfaces are formed at the junction of two materials. Failure occurs at adhered contacts because of bimaterial stress singularities at interface corners. The magnitude of the stress field induced by this singularity is given by the value of the notch stress intensity. Hence it becomes very important to design MEMS devices based on the stress intensity-fracture toughness failure criterion. The inherent uncertainty of design parameters (including the singularity parameters) in MEMS devices necessitates probabilistic rather than deterministic design. The probabilistic design of MEMS devices, with a microswitch as the device example, has been performed to find the probability of failure of the switch based on the stress intensity-fracture toughness failure criterion. The two main objectives of this research are to determine the stress field around a bimaterial singularity for a given bimaterial specimen and to evaluate the probability of failure based on the stress intensity-fracture toughness failure criterion using probabilistic analysis. The scope of work is fourfold. First, the order of the singularity is determined using two different methods, namely the complex potential method and the Airy stress function method, and the equivalence of these methods is verified. Second, the influence coefficients are determined using analytical methods. Third, the stress intensity factor is determined using finite element methods. Fourth, the probabilistic analysis of the microswitch is performed based on the stress intensity-fracture toughness failure criterion. The orders of singularity have been determined to be 0.512 and 0.696. The stress intensity factor has been determined to be 0.7708 MPa·m^0.488 from finite element analysis. The probability that the notch stress intensity exceeds the fracture toughness is found to be 0.612.
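For context, the singular stress field and the failure criterion named in this abstract are commonly written as a power law plus a threshold test. The following is a generic sketch, not the dissertation's exact formulation: conventions for the exponent (and hence the units of the notch stress intensity) vary between authors, and the angular functions depend on the specific material pair and corner geometry.

```latex
% Sketch: asymptotic stress field near a bimaterial interface corner.
% K is the notch stress intensity, \lambda the singularity exponent,
% f_{ij}(\theta) angular functions of the material pair and geometry.
\sigma_{ij}(r,\theta) \sim K \, r^{-\lambda} \, f_{ij}(\theta), \qquad 0 < \lambda < 1
% Stress intensity-fracture toughness criterion: failure occurs once K
% reaches the notch fracture toughness K_c, so with uncertain design
% parameters the probability of failure is
P_f = \Pr\left(K \ge K_c\right)
```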
Item: Antecedents to systems development: beliefs of information systems specialists and users (Texas Tech University, 1994-12). Havelka, Douglas.
This research project was undertaken to investigate an area of information systems development that had received sparse attention: users' and IS specialists' beliefs that may influence their interaction during the information requirements determination process. A general User/IS Specialist Interaction Framework was presented that postulates that the interaction between users and IS specialists during information systems development can be conceptualized as a set of behaviors. These behaviors are derived from: (1) users' and IS specialists' beliefs toward these behaviors; and (2) external factors that moderate the intended behaviors. In essence, the beliefs and external factors are antecedents to user/IS specialist interaction during development. Based on these propositions, a set of research questions directed at discovering the beliefs of users and IS specialists, and the differences between these beliefs toward the information requirements determination process, was developed. A two-stage empirical study was conducted to address these questions. The results of this work can be summarized as follows. First, it appears that users as a group, despite differences in experience, training, etc., do have a common set of beliefs toward the critical productivity factors influencing the information requirements determination process. Second, it appears that IS specialists as a group, despite differences in experience, training, and methods used, also have a common set of beliefs toward the critical productivity factors influencing information requirements determination. Third, the empirical evidence suggests that overall users and IS specialists disagree with regard to the relative importance of the critical productivity factors identified. Fourth, it appears that the beliefs of users toward the relative importance of several of the critical productivity factors for information requirements determination are significantly related to their level of involvement with the developed system. Fifth, it does not appear that the beliefs of users toward the relative importance of critical productivity factors for information requirements determination are necessarily significantly related to their level of satisfaction with the developed system.

Item: Automatic workload synthesis for early design studies and performance model validation (2005). Bell, Robert Henry; John, Lizy Kurian.
Computer designers rely on simulation systems to assess the performance of their designs before the design is transferred to silicon and manufactured. Simulators are used in early design studies to obtain projections of performance and power over a large space of potential designs. Modern simulation systems can be four orders of magnitude slower than native hardware execution. At the same time, the number of applications and their dynamic instruction counts have expanded dramatically. In addition, simulation systems need to be validated against cycle-accurate models to ensure accurate performance projections. In prior work, long-running applications are used for early design studies while hand-coded microbenchmarks are used for performance model validation. One proposed solution for early design studies is statistical simulation, in which statistics from the workload characterization of an executing application are used to create a synthetic instruction trace that is executed on a fast performance simulator. In prior work, workload statistics are collected as average behaviors based on instruction types. In the present research, statistics are collected at the granularity of the basic block, which improves the simulation accuracy of individual instructions. The basic block statistics form a statistical flow graph that provides a reduced representation of the application. The synthetic trace generated from a traversal of the flow graph is combined with memory access models, branching models, and novel program synthesis techniques to automatically create executable code that is useful for performance model validation. Runtimes for the synthetic versions of the SPEC CPU, STREAM, TPC-C, and Java applications are orders of magnitude faster than the runtimes of the original applications, with performance and power dissipation correlating to within 2.4% and 6.4%, respectively, on average. The synthetic codes are portable to a variety of platforms, permitting validations between diverse models and hardware. Synthetic workload characteristics can easily be modified to model different or future workloads. The use of statistics abstracts proprietary code, encouraging code sharing between industry and academia. The significantly reduced execution times consolidate the traditionally disparate workloads used for early design studies and model validation.
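To make the statistical-flow-graph idea concrete, here is a minimal sketch in Python. The function names and the toy block trace are hypothetical, not taken from the dissertation: it profiles basic-block successor frequencies and then random-walks the resulting graph to emit a synthetic trace with matching transition statistics.

```python
import random
from collections import defaultdict

def build_flow_graph(block_sequence):
    """Count successor frequencies for each basic block in a profile."""
    counts = defaultdict(lambda: defaultdict(int))
    for cur, nxt in zip(block_sequence, block_sequence[1:]):
        counts[cur][nxt] += 1
    # Normalize the counts into transition probabilities.
    return {
        blk: [(nxt, c / sum(succ.values())) for nxt, c in succ.items()]
        for blk, succ in counts.items()
    }

def synthesize_trace(graph, start, length):
    """Emit a synthetic basic-block trace by sampling the flow graph."""
    trace, cur = [start], start
    while len(trace) < length and cur in graph:
        succs, weights = zip(*graph[cur])
        cur = random.choices(succs, weights=weights)[0]
        trace.append(cur)
    return trace

# Toy profile: a two-block loop ("A", "B") with an occasional exit ("C").
observed = ["A", "B", "A", "B", "A", "C"] * 100
graph = build_flow_graph(observed)
print(synthesize_trace(graph, "A", 20))
```

A real synthesizer would attach instruction mixes, memory-access models, and branching models to each block, as the abstract describes; the graph traversal above is only the skeleton.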
Item: Beleaf : an earth-friendly solution to disposable dinnerware (2011-05). Adhikary, Amrita Prasad; Hall, Peter, 1965-; Olsen, Daniel M., 1963-.
This report documents an investigative design process that examines how established systems can be reconfigured through small shifts to make big changes. It is an attempt to establish a framework for designing sustainable solutions with the environment and social good in mind. In addressing the problems resulting from our indiscriminate use of plastic disposable dinnerware, and in offering a viable and earth-friendly system solution, I am interested in reminding fellow designers that accountability toward the environment is the new design reality. The report advocates methods that synthesize design for people, profit, and, most importantly, the planet. By using plates made from fallen leaves, the user fulfills a specific need for disposable dinnerware while simultaneously participating in the environmental task of closing the loop through responsible disposal and composting.

Item: Concurrency modeling extensions to the Fusion development methodology (Texas Tech University, 1997-05). Wenzel, Peter W.
The "Fusion" software development methodology is a self-described second-generation, full-coverage development method for object-oriented software, covering the traditional analysis, design, and implementation phases as well as providing management tools for software development. Fusion's deficiency is its lack of support for concurrency modeling, which is essential in the problem domains of all real-time systems. With this one exception, Fusion is an excellent example of a fully integrated object-oriented development methodology, combining the best of several first-generation object-oriented analysis and design (OOAD) methods. The Fusion development methodology may be extended by integrating concurrency modeling into the method, making it more suitable for real-time problem domains. The goals of this thesis are threefold: (1) identify the requirements for modeling concurrency in object-oriented systems, (2) propose extensions to the Fusion object-oriented method for modeling concurrency, and (3) demonstrate the proposed concurrency modeling extensions via a case study. The thesis identifies basic object-oriented concurrency modeling requirements by examining existing concurrency modeling techniques. These requirements are then used to form highly integrated concurrency modeling extensions to the Fusion object-oriented development methodology. Finally, the Fusion concurrency modeling extensions are demonstrated using the telecommunications real-time problem domain of cellular digital packet data (CDPD).

Item: Defect detection, design comprehension, and improved productivity with software gauges (Texas Tech University, 1996-12). Butsch, David C.
The cost of software development and maintenance is high, so management pushes to cut cost up front by building the product as rapidly as possible. Defect prevention and removal strategies add cost during the development phase but return the investment during the maintenance phase. Rapid development without concern for defect prevention and removal is simply confusing speed with progress, while too much focus on defect-free software without concern for the schedule is unrealistic. A balanced approach to the development of reliable software is therefore the goal. This thesis considers an approach to defect prevention and removal through the use of software gauges. These gauges offer visibility into software design, automate lessons learned, and improve productivity.
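The thesis's concrete gauges are not described in this abstract, so the following Python sketch is only one plausible reading of the idea: a gauge as an automated measurement that gives visibility into a design property (here, function size) and turns a lesson learned into a repeatable check. All names and thresholds are invented for illustration.

```python
import ast

def gauge_function_size(source, filename, max_statements=25):
    """Flag functions whose top-level body exceeds max_statements."""
    findings = []
    for node in ast.walk(ast.parse(source, filename=filename)):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            if len(node.body) > max_statements:
                findings.append((node.lineno, node.name, len(node.body)))
    return findings

# Example with a deliberately low threshold so the sample trips the gauge.
sample = "def busy():\n" + "".join(f"    x{i} = {i}\n" for i in range(6))
for lineno, name, size in gauge_function_size(sample, "sample.py", 5):
    print(f"sample.py:{lineno}: '{name}' has {size} statements (limit 5)")
```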
Item: Detecting and tolerating faults in distributed systems (2008-12). Ogale, Vinit Arun, 1979-; Garg, Vijay K. (Vijay Kumar), 1963-.
This dissertation presents techniques for detecting and tolerating faults in distributed systems. Detecting faults in distributed or parallel systems is often very difficult. We look at the problem of determining whether a property or assertion was true in the computation. We formally define a logic called BTL that can be used to define such properties. Our logic takes temporal properties into consideration, as these are often necessary for expressing conditions like safety violations and deadlocks. We introduce the idea of a basis of a computation with respect to a property. A basis is a compact and exact representation of the states of the computation where the property was true. We exploit the lattice structure of the computation and the structure of different types of properties, and we avoid brute-force approaches. We have shown that it is possible to efficiently detect all properties that can be expressed using nested negations, disjunctions, conjunctions, and the temporal operators possibly and always. Our algorithm is polynomial in the number of processes and events in the system, though it is exponential in the size of the property. After faults are detected, it is necessary to act on them and, whenever possible, continue operation with minimal impact. This dissertation also deals with designing systems that can recover from faults. We look at techniques for tolerating faults in data and in the state of the program. In particular, we look at the problem where multiple servers have different data and program state, all of which need to be backed up to tolerate failures. Most current approaches to this problem involve some form of replication; other approaches based on erasure coding have high computational and communication overheads. We introduce the idea of fusible data structures to back up data. This approach relies on the inherent structure of the data to determine techniques for combining multiple such structures on different servers into a single backup data structure. We show that most commonly used data structures, such as arrays, lists, stacks, and queues, are fusible, and we present algorithms for fusing them. This approach requires less space than replication without increasing the time complexity of any update. In case of failures, data from the backup and from the other non-failed servers is required for recovery. To maintain program state in case of failures, we assume that programs can be represented by deterministic finite state machines. Though this approach may not yet be practical for large programs, it is very useful for small concurrent programs like sensor networks or finite state machines in hardware designs. We present the theory of fusion of state machines. Given a set of such machines, we present a polynomial-time algorithm to compute another set of machines that can tolerate the required number of faults in the system.
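The simplest instance of the fusible-data-structure idea can be shown for arrays. This Python sketch is illustrative only (the dissertation's constructions cover more structure types and fault counts): instead of replicating k arrays, one fused array of elementwise sums is kept, and any single failed array is rebuilt from the backup plus the survivors.

```python
def fuse(arrays):
    """Combine equal-length arrays into one backup of elementwise sums."""
    return [sum(vals) for vals in zip(*arrays)]

def update(fused, index, old_value, new_value):
    """Mirror a single-element update on a primary into the backup."""
    fused[index] += new_value - old_value

def recover(fused, survivors):
    """Rebuild the one failed array from the backup and the survivors."""
    return [f - sum(vals) for f, vals in zip(fused, zip(*survivors))]

a1, a2, a3 = [1, 2, 3], [4, 5, 6], [7, 8, 9]
backup = fuse([a1, a2, a3])               # one array instead of 3 replicas
update(backup, 0, a2[0], 40); a2[0] = 40  # updates stay O(1)
assert recover(backup, [a1, a3]) == a2    # a2 "fails" and is rebuilt
```

Note the trade-off the abstract alludes to: the fused backup needs only one array's worth of space rather than full replicas, but recovery must read the surviving servers' data as well as the backup.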
Item: Development of an object-oriented CASE tool with detailed design information (Texas Tech University, 1997-08). Shi, Yan-Wei.
The research resulted in the creation of an object-oriented CASE tool that includes detailed design information. The system includes three major functions: a Class Librarian for keeping track of all classes available for reuse, a Class Browser for accessing and modifying the details (members and functions) of the classes, and a Graphic Design Tool for developing graphic diagrams of an object model. The tool supports the notations and strategies of the Object Modeling Technique (OMT) as presented by James Rumbaugh and co-workers. It also facilitates detailed system design, which involves specifying algorithms and concrete data structures for the software system. The classes created in the system can be used to produce STRIDES™ header files and template STRIDES™ source code. The user interface was designed as a graphic, user-friendly environment emphasizing a Windows-oriented, event-driven, point-and-click facility. The tool was implemented in Microsoft Visual C++ for the user interface and Microsoft Access for data management.

Item: Essays of new information systems design and pricing for supporting information economy (2005). Fang, Fang; Whinston, Andrew B.

Item: Essays on market-based information systems design and e-supply chain (2005-12). Guo, Zhiling, 1974-; Whinston, Andrew B.

Item: HyPerModels: hyperdimensional performance models for engineering design (2005). Turner, Cameron John; Crawford, Richard H.
Engineering design is an iterative process in which the designer determines an appropriate set of design variables and cycle parameters so as to achieve a set of performance index goals. The relationships between design variables, cycle parameters, and performance indices define the design space, a hyperdimensional representation of possible designs. To represent the design space, engineers employ metamodels, a technique that builds approximate or surrogate models of other models. Metamodels may be constructed from a wide variety of mathematical basis functions, but Hyperdimensional Performance Models (HyPerModels) derived from Non-Uniform Rational B-splines (NURBs) offer many unique advantages when compared to other metamodeling approaches. NURBs are defined by a set of control points, knot vectors, and the NURBs orders, resulting in a highly robust and flexible curve definition that has become the de facto computer graphics standard. The defining components of a NURBs HyPerModel can be used to define adaptive sequential sampling algorithms that allow the designer to efficiently survey the design space for interesting regions. The data collected from design space surveys can be represented with a HyPerModel by adapting NURBs fitting algorithms, originally developed for computer graphics, to address the unique challenges of representing a hyperdimensional design space. With a HyPerModel representation, visualization of the design space or of design subspaces such as the Pareto subspace is possible. HyPerModels support design space analysis for adaptive sequential sampling algorithms, for detecting robust design space regions, and for fault detection by comparing multiple HyPerModels obtained from the same system. Significantly, HyPerModels uniquely allow multi-start optimization algorithms to locate the global metamodel optimum in finite time. Each of these capabilities is demonstrated on example problems, including brushless DC motor fault detection and composite-material I-beam and gas turbine engine design, using the HyPerMaps software package. HyPerMaps provides the algorithms needed to adaptively sample a design space, construct a HyPerModel, and use a HyPerModel for visualization, analysis, or optimization. With HyPerMaps, an engineering designer has a window into the hyperdimensional design space, allowing exploration for undiscovered design variable combinations with superior performance capabilities.
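As a toy illustration of the surrogate-modeling workflow (a one-variable, non-rational B-spline standing in for the hyperdimensional NURBs models the dissertation builds; the sampled function and all values are invented), SciPy can fit and query a spline metamodel of an expensive performance function:

```python
import numpy as np
from scipy.interpolate import make_interp_spline

def expensive_performance(x):
    """Stand-in for a costly simulation of one performance index."""
    return np.sin(3 * x) + 0.3 * x ** 2

# Sample the design space at a handful of design-variable values.
x_samples = np.linspace(0.0, 2.0, 9)
y_samples = expensive_performance(x_samples)

# Fit a cubic B-spline surrogate; its control points and knot vector are
# chosen by the fitting routine, mirroring the NURBs ingredients above.
surrogate = make_interp_spline(x_samples, y_samples, k=3)

# The cheap surrogate can now be evaluated densely, e.g. for design-space
# visualization or as the objective in a multi-start optimization.
x_dense = np.linspace(0.0, 2.0, 201)
best = x_dense[np.argmin(surrogate(x_dense))]
print(f"surrogate minimum near x = {best:.3f}")
```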
Item: Methodology for creating human-centered robots : design and system integration of a compliant mobile base (2012-05). Wong, Pius Duc-min; Sentis, Luis; Deshpande, Ashish.
Robots have growing potential to enter the daily lives of people at home, at work, and in cities for a variety of service, care, and entertainment tasks. However, several challenges currently prevent widespread production and use of such human-centered robots. The goal of this thesis was first to help overcome one of these broad challenges: the lack of basic safety in human-robot physical interactions. Whole-body compliant control algorithms that could allow safer movement of complex robots, such as humanoids, had previously been simulated, but no robot had yet been documented to actually implement them. Therefore, a wheeled humanoid robot, "Dreamer," was developed to implement the algorithms and explore additional concepts in human-safe robotics. The lower mobile-base part of Dreamer, dubbed "Trikey," is the focus of this work. Trikey was iteratively developed, undergoing cycles of concept generation, design, modeling, fabrication, integration, testing, and refinement. Test results showed that Trikey and Dreamer safely performed movements under whole-body compliant control, which is a novel achievement. Dreamer will be a platform for future research and education in new human-friendly traits and behaviors. Finally, this thesis attempts to address a second broad challenge to advancing the field: the lack of a standard design methodology for human-centered robots. Based on the experience of building Trikey and Dreamer, a set of consistent design guidelines and metrics for the field is suggested. They account for the complex nature of such systems, which must address safety, performance, user-friendliness, and the capability for intelligent behavior.

Item: Real-time process and control simulation (Texas Tech University, 1996-08). Shah, Deval Vipinchandra.
This work addresses the problem of easily developing a controller for a complex process. Developing the controller requires modeling and simulation of the process in a simulation language, which helps close the gap between simulation studies and field realization. An example process, the flash tank chemical process, was chosen to compare the three languages commonly used for simulation: MATLAB, C, and C++. The nonlinear practical process was simulated in each of these languages, and an object-oriented model of the process was developed in C++. The increased functionality of MATLAB with the MATLAB Compiler and the C Math Library was also explored for this process. It was found that the simulation language should be chosen according to the priorities in developing the controller. Because of its direct low-level implementation, C can produce more efficient code at the processor level. Because it represents the process more directly in the program, an object-oriented approach in C++ eases frequent modifications to the process model. However, C or C++ code can become lengthy and difficult to program, whereas MATLAB's built-in functions can reduce the time needed to develop a controller.
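To illustrate why an object-oriented model eases modification, here is a minimal Python analogue of such a process model (the thesis used C++; the single-tank mass balance and every parameter below are invented for the example and are not the thesis's actual flash-tank equations):

```python
class FlashTank:
    """Toy process model: liquid level driven by inflow and valve outflow."""

    def __init__(self, area=1.0, outflow_coeff=0.5, level=0.5):
        self.area = area                    # tank cross-section (m^2)
        self.outflow_coeff = outflow_coeff  # valve coefficient
        self.level = level                  # liquid level (m)

    def derivative(self, inflow):
        """d(level)/dt from a simple mass balance with sqrt-law outflow."""
        outflow = self.outflow_coeff * self.level ** 0.5
        return (inflow - outflow) / self.area

    def step(self, inflow, dt):
        """Advance the state by one explicit-Euler step."""
        self.level += self.derivative(inflow) * dt
        return self.level

# Closed loop: a proportional controller drives the level toward a 1 m
# setpoint (note the steady-state offset typical of P-only control).
tank, setpoint, kp = FlashTank(), 1.0, 2.0
for _ in range(2000):
    u = max(0.0, kp * (setpoint - tank.level))  # inflow command
    tank.step(u, dt=0.01)
print(f"level after 20 s: {tank.level:.3f} m")
```

Swapping in different tank dynamics only touches derivative(), which is the kind of localized change the object-oriented C++ model is credited with above.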
Item: Refinement of the requirement definition concept in system development (Texas Tech University, 1998-12). White, Michelle May.
Requirement definition is an integral part of system development and a factor that contributes to its success. Throughout the literature there are documented problems and issues with respect to defining requirements for system development. There is evidence that requirement definition is often not performed well, is not communicated effectively, and lacks measures. A comprehensive view of requirement definition is missing from the literature. The objective of this research was to refine the requirement definition concept in system development by developing a comprehensive representation of the concept that is organized, coherent, unified, and measurable. The researcher surveyed the literature for existing information regarding requirement definition, such as requirement processes, models, issues, case studies, and measures, and generated methods to synthesize and refine this information. Since no consensus or comprehensive requirement definition representation was found in the literature, methods were developed to determine the areas of requirement definition, the primary functions performed in each area, and the interactions among areas. Measures were then developed for each area and used to generate a requirement definition assessment audit that compares a project's requirement definition effort to the comprehensive representation generated in this research. The assessment audit provides feedback to a project regarding the strengths and weaknesses of its requirement definition effort.

Item: Wireless transceiver for the TLL5000 platform : an exercise in system design (2009-12). Perkey, Jason Cecil; Gharpurey, Ranjit; McDermott, Mark.
This paper presents the hardware system design, development, and implementation plan for a wireless transceiver for The Learning Labs 5000 (TLL5000) educational platform. The project is a collaborative effort by Vanessa Canac, Atif Habib, and Jason Perkey to design and implement a complete wireless system, including physical hardware, physical-layer (PHY-layer) modulation and filters, error correction, drivers, and user-interface software. While the TLL5000 offers a number of features for a wide variety of applications, there is currently no system in place for transmitting data wirelessly from one circuit board to another. The system proposed in this report comprises an external transceiver that communicates with a software application running on the TLL-SILC 6219 ARM9 processor, which is interfaced with the TLL5000 baseboard. The details of a reference design, the hardware from the GNU Radio project, are discussed as a baseline and source of information. The state of the project and the hardware design are presented, as well as the specific portions of the project to which Jason Perkey made significant contributions.
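The report's PHY-layer specifics are not reproduced in this abstract, so as a generic illustration of the kind of modulation such a transceiver performs, here is textbook BPSK over an additive-noise channel in Python; the scheme and every parameter are assumptions, not the project's actual design:

```python
import numpy as np

rng = np.random.default_rng(0)

def bpsk_modulate(bits):
    """Map bits {0, 1} to antipodal symbols {-1.0, +1.0}."""
    return 2.0 * np.asarray(bits, dtype=float) - 1.0

def awgn(symbols, snr_db):
    """Add white Gaussian noise at the given per-symbol SNR."""
    noise_power = 10 ** (-snr_db / 10.0)
    return symbols + rng.normal(0.0, np.sqrt(noise_power), symbols.shape)

def bpsk_demodulate(received):
    """Hard decision: positive sample -> 1, negative -> 0."""
    return (received > 0).astype(int)

bits = rng.integers(0, 2, 10_000)
rx = bpsk_demodulate(awgn(bpsk_modulate(bits), snr_db=6.0))
print(f"bit error rate at 6 dB SNR: {np.mean(rx != bits):.4f}")
```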