Browsing by Subject "Multiprocessors"
Now showing 1 - 5 of 5
Item: A high level protocol for a loosely coupled multiprocessing environment (Texas Tech University, 1986-08) Wey, Yih-shyan
Abstract: Not available

Item: Part and tool scheduling rules for a flexible manufacturing system (Texas Tech University, 1983-12) Acree, Elaine Strong
Abstract: Scheduling rules for a specific general-purpose Flexible Manufacturing System (FMS) were investigated. The system consists of six general-purpose machines with local storage at each machine, a work-in-process queue, and a material handling cart. The primary purpose of this research was to investigate the effects and interactions of three types of scheduling rules on FMS performance. The rules included part scheduling on the machines and two resource allocation rules for tool scheduling and cart scheduling. In addition, the part scheduling rule was modified with a tool look-ahead rule to minimize tool delay. Due to the numerous similarities between the scheduling and resource allocation problems of computer operating systems and the FMS, some techniques from operating systems were applied to the FMS to reduce thrashing, prevent deadlocks, and increase cart utilization efficiency. A simulation model was developed to investigate the scheduling rules; SLAM was the language used to simulate the system. The main performance criteria used in this model were machine utilization, cart utilization, total time in system, number of finished parts, and number of completed stages. The most important result of this research was the discovery that individual tool allocation is superior to total tool allocation. For the system studied, the performance measurements were insensitive to the part and cart scheduling rules. However, a severe reduction in cart speed caused the system to thrash. The reduction of cart speed provided the proper experimental conditions to show that SDTF can be superior to FCFS for this system operating under thrashing conditions.
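The abstract does not specify how the tool look-ahead rule works; a minimal sketch of what such a part-selection rule might look like is given below. All names, data shapes, and policy details are illustrative assumptions, not taken from the dissertation.

```python
# Hypothetical sketch of a part-scheduling rule with tool look-ahead:
# among parts queued for a machine, prefer one whose required tool is
# already mounted, to avoid tool-change delay; otherwise fall back to
# first-come-first-served. Names and rule details are illustrative only.

def select_next_part(queue, mounted_tools):
    """Pick the next part for a machine, looking ahead at tool needs."""
    for part in queue:  # queue is in arrival order (FCFS baseline)
        if part["tool"] in mounted_tools:
            return part  # this part incurs no tool delay
    return queue[0] if queue else None  # no match: plain FCFS

queue = [{"id": "P1", "tool": "drill"}, {"id": "P2", "tool": "mill"}]
print(select_next_part(queue, mounted_tools={"mill"})["id"])  # P2
```

Under this sketch, the rule only reorders parts when doing so eliminates a tool wait, which matches the abstract's stated goal of minimizing tool delay.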
The tool look-ahead feature provided a gross improvement in machine utilization and a reduction in time spent waiting for tools; however, this improvement was not statistically significant in all cases.

Item: Reconfiguration strategies in multiprocessor systems (Texas Tech University, 1984-05) Ratheal, Steve Weldon
Abstract: Not available

Item: Stochastic Petri nets applied to systematic performance evaluation of static allocation schemes in heterogeneous computing environments (Texas Tech University, 1996-05) McSpadden, Albert R.
Abstract: Not available

Item: A technology-scalable composable architecture (2007) Kim, Changkyu; Burger, Douglas C., Ph. D.
Abstract: Clock rate scaling can no longer sustain computer system performance scaling, due to power and thermal constraints and the diminishing performance returns of deep pipelining. Future performance improvements must therefore come from mining concurrency from applications. However, increasing global on-chip wire delays will limit the amount of state available in a single cycle, hampering the ability to mine concurrency with conventional approaches. To address these technology challenges, the processor industry has migrated to chip multiprocessors (CMPs). The disadvantage of conventional CMP architectures, however, is their relative inflexibility in meeting the wide range of application demands and operating targets that now exist. The granularity (e.g., issue width), the number of processors on a chip, and the memory hierarchies are fixed at design time based on the target workload mix, which results in suboptimal operation as the workload mix and operating targets change over time. In this dissertation, we explore the concept of composability to address both the increasing wire delay problem and the inflexibility of conventional CMP architectures.
The basic concept of composability is the ability to adapt dynamically to diverse applications and operating targets, in both granularity and functionality, by aggregating fine-grained processing units or memory units. First, we propose a composable on-chip memory substrate, called the Non-Uniform Access Cache Architecture (NUCA), to address increasing on-chip wire delay in future large caches. The NUCA substrate breaks large on-chip memories into many fine-grained, independently accessible memory banks, with a switched network embedded in the cache. Lines can be mapped into this array of memory banks with fixed or dynamic mappings; in the dynamic case, cache lines can migrate within the cache to further reduce the average cache hit latency. Second, we evaluate a range of strategies for building a composable processor. Composable processors provide the flexibility to adapt processor granularity to various application demands and operating targets, and thus to choose the hardware configuration best suited to any given point. A composable processor consists of a large number of low-power, fine-grained processor cores that can be aggregated dynamically to form more powerful logical processors. We present architectural innovations to support composability in a power- and area-efficient manner.
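The dynamic-mapping idea in the NUCA abstract can be sketched in a few lines: lines enter a far (slow) bank and migrate one bank closer on each hit, so frequently used lines end up with low hit latency. This is a toy model under assumed parameters, not the dissertation's actual bank organization or migration policy.

```python
# Illustrative sketch of dynamic-NUCA-style line migration (not the
# dissertation's actual design): a line starts in the farthest bank and
# is promoted one bank closer to the core on each access, so hot lines
# converge to the lowest-latency bank. Bank count and latencies are
# made-up parameters for the sketch.

BANK_LATENCY = [3, 7, 11, 15]  # cycles; bank 0 is closest to the core

class DynamicNUCA:
    def __init__(self):
        self.bank_of = {}  # line address -> current bank index

    def access(self, addr):
        """Return the hit latency for addr and migrate it one bank closer."""
        if addr not in self.bank_of:
            self.bank_of[addr] = len(BANK_LATENCY) - 1  # fill farthest bank
        bank = self.bank_of[addr]
        self.bank_of[addr] = max(0, bank - 1)  # gradual promotion on access
        return BANK_LATENCY[bank]

cache = DynamicNUCA()
print([cache.access(0x40) for _ in range(4)])  # latency falls: [15, 11, 7, 3]
```

A fixed mapping would correspond to skipping the promotion step: every line would keep the latency of the bank its address hashes to, which is why dynamic mappings can lower the average hit latency for skewed access patterns.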