Browsing by Subject "Scheduling"
Now showing 1 - 20 of 25
Item A Process Integration Approach to the Strategic Design and Scheduling of Biorefineries (2011-02-22) Elms, Rene Davina
This work focused upon the design and operation of biodiesel production facilities in support of the broader goal of developing a strategic approach to the development of biorefineries. Biodiesel production provided an appropriate starting point for these efforts. The work was divided into two stages. Various feedstocks may be utilized to produce biodiesel, including virgin vegetable oils and waste cooking oil. With changing prices, supply, and demand of feedstocks, a need exists to consider various feedstock options. The objective of the first stage was to develop a systematic procedure for the scheduling and operation of flexible biodiesel plants accommodating a variety of feedstocks. This work employed a holistic approach and a combination of process simulation, synthesis, and integration techniques to provide: process simulation of a biodiesel plant for various feedstocks, integration of energy and mass resources, optimization of process design and scheduling, and techno-economic assessment and sensitivity analysis of the proposed schemes. An optimization formulation was developed to determine scheduling and operation for various feedstocks, and a case study was solved to illustrate the merits of the devised procedure. With increasing attention to the environmental impact of discharging greenhouse gases (GHGs), there has been growing public pressure to reduce the carbon footprint associated with fossil fuel use. In this context, one key strategy is the substitution of fossil fuels with biofuels such as biodiesel. The design of biodiesel plants has traditionally been based on technical and economic criteria. GHG policies have the potential to significantly alter the design of these facilities, the selection of feedstocks, and the scheduling of multiple feedstocks.
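The multiperiod feedstock-selection problem described here can be illustrated with a toy model (feedstock names, prices, and supplies below are hypothetical, not from the dissertation): in each period, meet a fixed feedstock demand at minimum cost, greedily filling from the cheapest available source. The actual work uses a full optimization formulation; this sketch only shows the flavor of the decision.

```python
# Toy multiperiod feedstock scheduling: each period, meet biodiesel
# feedstock demand at minimum cost given per-feedstock prices and
# supply limits. All numbers are hypothetical, for illustration only.

def schedule_feedstocks(periods, demand):
    """periods: list of {feedstock: (price, supply)} dicts, one per period.
    Returns (total_cost, plan) where plan[t] maps feedstock -> amount bought."""
    total_cost = 0.0
    plan = []
    for prices in periods:
        remaining = demand
        buy = {}
        # Greedy: fill demand from the cheapest feedstock first.
        for name, (price, supply) in sorted(prices.items(), key=lambda kv: kv[1][0]):
            amount = min(remaining, supply)
            if amount > 0:
                buy[name] = amount
                total_cost += amount * price
                remaining -= amount
            if remaining == 0:
                break
        plan.append(buy)
    return total_cost, plan

periods = [
    {"waste_oil": (0.30, 60), "virgin_oil": (0.55, 200)},   # period 1
    {"waste_oil": (0.45, 100), "virgin_oil": (0.50, 200)},  # period 2
]
cost, plan = schedule_feedstocks(periods, demand=100)
```

A real formulation would add GHG-penalty terms to the objective, as the second stage of the work describes; the greedy rule here would then no longer be optimal.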
The objective of the second stage was to develop a systematic approach to the design and scheduling of biodiesel production processes while accounting for the effect of GHG policies. An optimization formulation was developed to maximize the profit of the process subject to flowsheet synthesis and performance modeling equations. The carbon footprint is accounted for through a life cycle analysis (LCA). The objective function includes a term reflecting the impact of the LCA of a feedstock and its processing to biodiesel. A multiperiod approach was used, and a case study was solved with several scenarios of feedstocks and GHG policies.

Item Adaptive Power Control for Single and Multiuser Opportunistic Systems (2010-07-14) Nam, Sung Sik
In this dissertation, adaptive power control for single and multiuser opportunistic systems is investigated. First, a new adaptive power-controlled diversity combining scheme for single-user systems is proposed, which is then extended to the multiuser case. In the multiuser case, we first propose two new threshold-based parallel multiuser scheduling schemes without power control: the on-off based scheduling (OOBS) scheme and the switched based scheduling (SBS) scheme. We then propose and study the performance of threshold-based power allocation algorithms for the SBS scheme. Finally, we introduce a unified analytical framework to determine the joint statistics of partial sums of ordered i.i.d. random variables (RVs), and the impact of interference on the performance of parallel multiuser scheduling is then investigated based on this framework.

Item Control-friendly scheduling algorithms for multi-tool, multi-product manufacturing systems (2011-12) Bregenzer, Brent Constant; Qin, Joe; Hasenbein, John J.; Edgar, Thomas F.; Hwang, Gyeong S.; Kutanoglu, Erhan; Bonnecaze, Roger T.
The fabrication of semiconductor devices is a highly competitive and capital-intensive industry.
Due to the high costs of building wafer fabrication facilities (fabs), products must be made efficiently with respect to both time and material, and expensive unit operations (tools) must be utilized as much as possible. The process flow is characterized by frequent machine failures, drifting tool states, parallel processing, and reentrant flows. In addition, the competitive nature of the industry requires products to be made quickly and within tight tolerances. All of these factors conspire to make both the scheduling of product flow through the system and the control of product quality metrics extremely difficult. Much research has been done on the two problems separately, but until recently, interactions between the two systems, which can sometimes be detrimental to one another, have mostly been ignored. The research contained here tackles the scheduling problem by utilizing objectives based on control system parameters, so that the two systems behave in a mutually beneficial manner. A non-threaded control system is used that models the multi-tool, multi-product process in state-space form and estimates the states using a Kalman filter. Additionally, the process flow is modeled by a discrete event simulation. The two systems are then merged to give a representation of the overall system. Two control system matrices, the estimate error covariance matrix from the Kalman filter and a square form of the system observability matrix called the information matrix, are used to generate several control-based scheduling algorithms. These methods are then tested against more traditional approaches from the scheduling literature to determine their effectiveness, on the basis of both how well they maintain the outputs near their targets and how well they minimize the cycle time of the products in the system.
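One way to picture a covariance-based dispatch rule of the kind described above (a sketch only, with hypothetical product IDs and matrices, not the dissertation's actual algorithm): among queued products, dispatch the one whose state-estimate error covariance has the largest trace, so that processing it yields the most informative measurement for the controller.

```python
# Sketch of a control-aware dispatch rule: pick the queued product whose
# Kalman-filter error covariance has the largest trace (i.e., the most
# uncertain estimate). Matrices below are hypothetical illustration data.

def trace(matrix):
    return sum(matrix[i][i] for i in range(len(matrix)))

def dispatch_by_uncertainty(queue, covariances):
    """queue: list of product ids; covariances: id -> square matrix (list of lists)."""
    return max(queue, key=lambda pid: trace(covariances[pid]))

covariances = {
    "A": [[0.2, 0.0], [0.0, 0.1]],  # well-estimated product state
    "B": [[1.5, 0.1], [0.1, 0.9]],  # stale estimate, high uncertainty
    "C": [[0.4, 0.0], [0.0, 0.4]],
}
chosen = dispatch_by_uncertainty(["A", "B", "C"], covariances)
```

A scheduler built purely on this rule would ignore cycle time, which is exactly the tension the dissertation studies via Pareto comparisons.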
The two metrics are viewed simultaneously through the use of Pareto plots, and the merits of the various scheduling methods are judged on the basis of Pareto optimality for several test cases.

Item Data-driven modeling and optimization of sequential batch-continuous process (2016-05) Park, Jungup; Edgar, Thomas F.; Baldea, Michael; Djurdjanovic, Dragan; Rochelle, Gary T.; Truskett, Thomas M.
Driven by the need to lower capital expenditures and operating costs, as well as by competitive pressure to increase product quality and consistency, modern chemical processes have become increasingly complex. These trends are manifest, on the one hand, in complex equipment configurations and, on the other hand, in a broad array of sensors (and control systems) that generate large quantities of operating data. Of particular interest is the combination of two traditional routes of chemical processing: batch and continuous. Batch-to-continuous (B2C) processes, which constitute the topic of this dissertation, comprise a batch section, which is responsible for preparing the materials that are then processed in the continuous section. In addition to merging the modeling, control, and optimization approaches related to the batch and continuous operating paradigms --which are radically different in many aspects-- challenges related to analyzing the operation of such processes arise from the multi-phase flow. In particular, we consider the case where a particulate solid is suspended in a liquid ``carrier'' in the batch stage, and the two-phase mixture is conveyed through the continuous stage. Our explicit goal is to provide a complete operating solution for such processes, starting with the development of meaningful and computationally efficient mathematical models, continuing with a control and fault detection solution, and finishing with a production scheduling concept.
Owing to process complexity, we reject out of hand the use of first-principles models, which are inevitably high-dimensional and computationally expensive, and focus on data-driven approaches instead. Raw data obtained from the chemical industry are subject to noise, equipment malfunctions, and communication failures and, as such, data recorded in process historian databases may contain outliers and measurement noise. Without proper pretreatment, the accuracy and performance of a model derived from such data may be inadequate. In the next chapter of this dissertation, we address this issue and evaluate several outlier removal techniques and filtering methods using actual production data from an industrial B2C system. We also address a challenge specific to B2C systems: synchronizing the timing of the batch data with the data collected from the continuous section of the process. Variable-wise unfolded data (a typical approach for batch processes) exhibit measurement gaps between the batches; however, no such gaps occur in the subsequent continuous section. These data gaps have an impact on data analysis and, in order to address this issue, we provide a method for filling in the missing values. The batch characteristic values are assigned in the gaps to match the data length with the continuous process, a procedure that preserves meaningful process correlations. Data-driven modeling techniques such as principal component analysis (PCA) and partial least squares (PLS) regression are well established for modeling batch or continuous processes. In this thesis, we apply them to the B2C systems under consideration. Specific challenges that arise during modeling of these systems are related to nonlinearity, which, in turn, is due to multiple operating modes associated with different product types and grades.
In order to deal with this, we propose partitioning the gap-filled data set into subsets using k-means clustering. Using this clustering, a large data set that reflects multiple operating modes and the associated nonlinearity can be broken down into subsets in which the system exhibits approximately linear behavior. Also, in order to further increase model accuracy, the inputs to the model need to be refined. Unrelated variables may corrupt the resulting model by introducing unnecessary noise and irrelevant information; by properly eliminating uninformative variables, model performance can be improved along with interpretability. We use variable selection methods that examine the model coefficients or variable importance in projection (VIP) values to determine the variables to retain in the model. Developing a model to estimate the final product quality poses different challenges. Measuring and quantifying the final product quality online can be limited by physical and economic constraints. Physically, some quantities cannot be measured due to sensor sizes or the surrounding environment. Economically, the offline ``lab'' measurements may destroy the sample used for testing. These constraints lead to multiple sampling rates: the process measurements are stored and available continuously in real time, but the quality measurements have a much lower sampling rate. In order to account for this discrepancy, the online process measurements are down-sampled to match the sampling frequency of the lab measurements, and subsequently, soft sensors can be developed to estimate the final product quality. With the soft sensor in place, the process needs to be optimized to maximize plant efficiency. Using real-time optimization, the optimal sequence of manipulated inputs that minimizes off-spec production is calculated.
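The cluster-then-regress strategy described above can be sketched in miniature (a toy, one-feature version with synthetic data; the dissertation uses multivariate PCA/PLS models): partition the data with k-means so each cluster covers one operating mode, then fit a separate linear model per cluster.

```python
import statistics

# Toy cluster-then-regress: 1-D k-means (k=2) splits data by operating
# mode, then a separate least-squares line is fit per cluster so each
# local model is approximately linear. Data below are synthetic.

def kmeans_1d(xs, k=2, iters=20):
    centers = [min(xs), max(xs)]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for x in xs:
            i = min(range(k), key=lambda j: abs(x - centers[j]))
            groups[i].append(x)
        centers = [statistics.fmean(g) if g else centers[i]
                   for i, g in enumerate(groups)]
    labels = [min(range(k), key=lambda j: abs(x - centers[j])) for x in xs]
    return centers, labels

def fit_line(xs, ys):
    xbar, ybar = statistics.fmean(xs), statistics.fmean(ys)
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
             / sum((x - xbar) ** 2 for x in xs))
    return slope, ybar - slope * xbar

# Two operating modes with different local gains.
xs = [1, 2, 3, 4, 11, 12, 13, 14]
ys = [2 * x for x in xs[:4]] + [5 * x - 30 for x in xs[4:]]
centers, labels = kmeans_1d(xs)
models = []
for c in range(2):
    cx = [x for x, l in zip(xs, labels) if l == c]
    cy = [y for y, l in zip(ys, labels) if l == c]
    models.append(fit_line(cx, cy))
```

Each cluster recovers its own local slope, which is the point of the partitioning: one global linear model would fit neither mode well.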
In addition, the optimal sequences of setpoints can be calculated by carrying out the scheduling calculation with the process model. Traditionally, the scheduling calculation is carried out without taking the process dynamics into account, which could result in off-spec products if a disturbance is introduced. Incorporating the process dynamics into the scheduling layer poses numerous numerical challenges. The proposed time scale-bridging model (SBM) is able to capture the input-output behavior of the process while greatly reducing the computational complexity and time.

Item DSP operating systems (2011-12) Kardonik, Michael; Garg, Vijay K. (Vijay Kumar), 1963-
This report presents operating systems that are designed to run on some of today’s most popular DSP platforms. We look at the functionality those OSes provide to users, how they compare to general-market embedded OSes (such as VxWorks and Linux), and how they fit the newest DSP platforms, which feature multicore architectures and highly integrated SoCs. We also want to understand how those OSes can be utilized to implement selected real-time scheduling approaches.

Item Dynamic scheduling system based on changes in job characteristics (Texas Tech University, 1997-12) Buraparate, Viroj
The Dynamic Scheduling System (DSS) Based on Changes in Job Characteristics is a system that adjusts a current schedule in response to predictable or unpredictable changes. Changes in manufacturing systems are those that occur during production and cause the systems to behave unpredictably. Understanding the relationship between these changes and their effects can be used to lessen such manufacturing problems. The main concept of this scheduling system is to continuously monitor and predict a manufacturing system's status so that, as soon as a change is detected or can be predicted, the scheduling system reacts by offering new production schedules that lessen the effects of the change.
This system integrates several techniques (e.g., control charts, forecasting models, linear regression, and statistical analysis) to provide a scheduling system that can be used in a dynamic manufacturing environment. This dissertation shows in detail how to develop and test a DSS prototype. Simulation modeling and statistical analysis are used as a basis to select appropriate variables in this prototype. A hypothetical six-machine dynamic job shop is developed using the GPSS/H simulation language to compare three performance measures -- weighted mean flow time, weighted mean tardiness, and weighted mean lateness -- obtained from the DSS prototype versus four dispatching rules (SPT, S/OPN, FIFO, and EDD). By comparing results from 300 random test cases, it is found that DSS generally produces results as good as the best results obtained among SPT, S/OPN, FIFO, and EDD 84% of the time. However, for the weighted mean flow time and weighted mean lateness performance measures, DSS matched up to 95% of the best results. Thus, based on these simulations, this DSS prototype has shown that by incorporating the abilities to monitor, forecast, and adjust current schedules in a dynamic manufacturing system, undesirable results can be avoided.

Item Energy Efficient Scheduling for Real-Time Systems (2012-02-14) Gupta, Nikhil
The goal of this dissertation is to extend the state of the art in real-time scheduling algorithms to achieve energy efficiency. Currently, Pfair scheduling is one of the few scheduling frameworks that can optimally schedule a periodic real-time taskset on a multiprocessor platform. Despite this theoretical optimality, there are significant concerns about the efficiency and applicability of Pfair scheduling in practical situations. This dissertation studies and proposes solutions to such efficiency and applicability concerns. It also explores temperature-aware energy management in the domain of real-time scheduling.
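Three of the dispatching rules benchmarked in the DSS study above are simple priority functions over the queue, which a few lines make concrete (job data are hypothetical; S/OPN, which needs remaining-operation counts, is omitted for brevity):

```python
# SPT, FIFO, and EDD dispatching as priority functions over the queue.
# Jobs are (id, processing_time, arrival_time, due_date) tuples with
# hypothetical values, for illustration only.

jobs = [
    ("J1", 5, 0, 20),
    ("J2", 2, 1, 9),
    ("J3", 8, 2, 12),
]

spt = min(jobs, key=lambda j: j[1])[0]   # shortest processing time
fifo = min(jobs, key=lambda j: j[2])[0]  # first in, first out
edd = min(jobs, key=lambda j: j[3])[0]   # earliest due date
```

The DSS idea is then to switch among (or outperform) such static rules as monitored shop conditions change.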
The thesis of this dissertation is that the implementation efficiency of Pfair scheduling algorithms can be improved, and that the temperature awareness of a real-time system can be improved, while considering variation in task execution times, to reduce energy consumption. This thesis is established through research in a number of directions. First, we explore the applicability of the Dynamic Voltage and Frequency Scaling (DVFS) feature of the underlying platform within Pfair-scheduled systems, and propose techniques that use DVFS to reduce energy consumption in Pfair scheduling. Next, we explore the problem of quantum size selection in Pfair-scheduled systems so that runtime overheads are minimized. We also propose a hardware design for a central Pfair scheduler core in a multiprocessor system to minimize the overheads and energy consumption of Pfair scheduling. Finally, we propose a temperature-aware energy management scheme for tasks with varying execution times.

Item Energy efficient scheduling techniques for real-time embedded systems (Texas A&M University, 2004-09-30) Prathipati, Rajesh Babu
Battery-powered portable embedded systems are widely used in many applications. These systems must concurrently perform a multitude of complex tasks under stringent time constraints, and as they become more complex and incorporate more functionality, they become more power-hungry. Thus, reducing power consumption and extending battery lifespan while guaranteeing timing constraints has become a critical aspect of designing such systems. This gives rise to three aspects of research: (i) guaranteeing the execution of hard real-time tasks by their deadlines, (ii) determining the minimum voltage under which each task can be executed, and (iii) techniques to take advantage of run-time variations in the execution times of tasks. In this research, we present techniques that address the above aspects in single- and multi-processor embedded systems.
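A classic static DVFS calculation illustrates aspect (ii) above (this is the textbook EDF-based scheme, not necessarily either dissertation's exact technique): under EDF, a periodic taskset is schedulable if and only if its utilization is at most 1, so the processor can be slowed until the stretched workload exactly fills the schedule.

```python
# Static DVFS sketch: under EDF, a periodic taskset with utilization
# U <= 1 remains schedulable at normalized speed f = U, since execution
# times scale as C/f and utilization at speed f becomes U/f = 1.
# Taskset values are hypothetical.

def min_speed(tasks):
    """tasks: list of (wcet_at_fmax, period). Returns normalized speed in (0, 1]."""
    utilization = sum(c / t for c, t in tasks)
    if utilization > 1.0:
        raise ValueError("taskset not schedulable even at f_max")
    return utilization

tasks = [(1, 4), (1, 5), (2, 10)]  # hypothetical periodic tasks
speed = min_speed(tasks)           # 0.25 + 0.20 + 0.20 = 0.65
```

Since dynamic power falls roughly with the cube of frequency at matched voltage, running at 65% speed can yield large energy savings; run-time slack reclamation (aspect (iii)) lowers the speed further when tasks finish early.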
We study the performance of the proposed techniques on various benchmarks in terms of energy savings.

Item Exploiting hardware heterogeneity and parallelism for performance and energy efficiency of managed languages (2015-12) Jibaja, Ivan; Witchel, Emmett; McKinley, Kathryn S.; Blackburn, Stephen M.; Batory, Don; Lin, Calvin
On the software side, managed languages and their workloads are ubiquitous, executing on mobile, desktop, and server hardware. Managed languages boost the productivity of programmers by abstracting away the hardware using virtual machine technology. On the hardware side, modern hardware increasingly exploits parallelism to boost energy efficiency and performance with homogeneous cores, heterogeneous cores, graphics processing units (GPUs), and vector instructions. Two major forms of parallelism are task parallelism across different cores and vector instructions for data parallelism. With task parallelism, the hardware allows simultaneous execution of multiple instruction pipelines through multiple cores. With data parallelism, one core can perform the same instruction on multiple pieces of data. Furthermore, we expect hardware parallelism to continue to evolve and provide more heterogeneity. Existing programming language runtimes must continuously evolve so programmers and their workloads may efficiently utilize this evolving hardware for better performance and energy efficiency. However, efficiently exploiting hardware parallelism is at odds with programmer productivity, which seeks to abstract hardware details. My thesis is that managed language systems should and can abstract hardware parallelism with modest to no burden on developers to achieve high performance, energy efficiency, and portability on ever-evolving parallel hardware.
In particular, this thesis explores how the runtime can optimize and abstract heterogeneous parallel hardware and how the compiler can exploit data parallelism through new high-level language abstractions, with minimal burden on developers. We explore solutions at multiple levels of abstraction for different types of hardware parallelism. (1) For recently introduced asymmetric multicore processors (AMPs), we design and implement an application scheduler in the Java virtual machine (JVM) that requires no changes to existing Java applications. The scheduler uses feedback from dynamic analyses that automatically identify critical threads and classify application parallelism. Our scheduler automatically accelerates critical threads, honors thread priorities, considers core availability and thread sensitivity, and load-balances scalable parallel threads on big and small cores to improve average performance by 20% and energy efficiency by 9% on frequency-scaled AMP hardware for scalable, non-scalable, and sequential workloads over prior research and existing schedulers. (2) To exploit vector instructions, we design SIMD.js, a portable single instruction multiple data (SIMD) language extension for JavaScript (JS), and implement its compiler support, which together add fine-grain data parallelism to JS. Our design principles seek portability, scalable performance across various SIMD hardware implementations, performance neutrality without SIMD hardware, and compiler simplicity to ease vendor adoption in multiple browsers. We introduce type speculation, compiler optimizations, and code generation that convert high-level JS SIMD operations into minimal numbers of native SIMD instructions. Finally, to accomplish wide adoption of our portable SIMD language extension, we explore, analyze, and discuss the trade-offs of four different approaches that provide the functionality of SIMD.js when vector instructions are not supported by the hardware.
SIMD.js delivers an average performance improvement of 3.3× on microbenchmarks and key graphics algorithms on various hardware platforms, browsers, and operating systems. These language extension and compiler technologies are in the final approval process for inclusion in the JavaScript standard. This thesis shows that using virtual machine technology protects programmers from the underlying details of hardware parallelism, achieves portability, and improves performance and energy efficiency.

Item Fluid and queueing networks with Gurvich-type routing (2015-08) Sisbot, Emre Arda; Hasenbein, John J.; Bickel, James Eric; Cudina, Milica; Djurdjanovic, Dragan; Khajavirad, Aida
Queueing networks have applications in a wide range of domains, from call center management to telecommunication networks. Motivated by a healthcare application, this dissertation analyzes a class of queueing and fluid networks with an additional routing option that we call Gurvich-type routing. The networks we consider include parallel buffers, each associated with a different class of entity, and Gurvich-type routing allows the assignment of an incoming entity to one of the classes to be controlled. In addition to routing, the scheduling of entities is also controlled, as the classes of entities compete for service at the same station. A major theme in this work is the investigation of the interplay between this routing option and the scheduling decisions in networks with various topologies. The first part of this work focuses on a queueing network composed of two parallel buffers. We form a Markov decision process representation of this system and prove structural results on the optimal routing and scheduling controls. Via these results, we determine a near-optimal discrete policy by solving the associated fluid model along with perturbation expansions. In the second part, we analyze a single-station fluid network composed of N parallel buffers for arbitrary N.
For this network, along with structural proofs on the optimal scheduling policies, we show that the optimal routing policies are threshold-based. We then develop a numerical procedure to compute the optimal policy for any initial state. The final part of this work extends the analysis to tandem fluid networks composed of two stations. For two different models, we provide results on the optimal scheduling and routing policies.

Item Fundamentals of distributed transmission in wireless networks : a transmission-capacity perspective (2011-05) Liu, Chun-Hung; Andrews, Jeffrey G.; Shakkottai, Sanjay; Arapostathis, Ari; Morton, David; Vishwanath, Sriram
Interference is a defining feature of a wireless network, and how to optimally deal with it is one of the most critical and least understood aspects of decentralized multiuser communication. This dissertation focuses on distributed transmission strategies that a transmitter can follow to achieve reliability while reducing the impact of interference. The problem is investigated from three directions -- distributed opportunistic scheduling, multicast outage and transmission capacity, and ergodic transmission capacity -- which study distributed transmission in different scenarios from a transmission-capacity perspective. Transmission capacity is a spatial throughput metric for a large-scale wireless network with outage constraints. To understand the fundamental limits of distributed transmission, these three directions are investigated through the underlying tradeoffs in different transmission scenarios. All analytic results regarding the three directions are rigorously derived and proved within the framework of transmission capacity. For the first direction, three distributed opportunistic scheduling schemes -- distributed channel-aware, interferer-aware, and interferer-channel-aware scheduling -- are proposed. The main idea of the three schemes is to avoid transmitting in a deep-fading and/or severely interfering context.
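The "avoid deep fades and severe interference" idea behind such threshold-based opportunistic schemes reduces to a simple local decision rule, sketched here with hypothetical thresholds and channel samples (the dissertation's schemes and analysis are, of course, far more involved):

```python
# Sketch of threshold-based opportunistic transmission: a node transmits
# only when its channel gain is above a threshold and the measured
# interference is below one, deferring in deep fades or under severe
# interference. Thresholds and samples are hypothetical.

def should_transmit(channel_gain, interference, gain_min=0.5, interference_max=1.0):
    return channel_gain >= gain_min and interference <= interference_max

decisions = [
    should_transmit(0.9, 0.3),  # good channel, low interference -> transmit
    should_transmit(0.2, 0.3),  # deep fade -> defer
    should_transmit(0.9, 2.5),  # severe interference -> defer
]
```

The analytical work then characterizes how such thresholds should be set so that the network-wide outage constraint is met while spatial throughput (transmission capacity) is maximized.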
Theoretical analysis and simulations show that the three schemes are able to achieve high transmission capacity and reliability. The second direction focuses on the transmission capacity problem in a distributed multicast transmission scenario. Multicast transmission, wherein the same packet must be delivered to multiple receivers, has several distinctive traits as opposed to the more commonly studied unicast transmission. A general expression for the scaling law of multicast transmission capacity is found, and it provides insight into how to perform distributed single-hop and multi-hop retransmissions. In the third direction, the transmission capacity problem is investigated for Markovian fading channels with temporal and spatial ergodicity. The scaling law of the ergodic transmission capacity is derived, and it can indicate a long-term distributed transmission and interference management policy for enhancing transmission capacity.

Item Management views on performance-based scheduling (Texas Tech University, 2001-12) Tobin, Eric R.
This study examined restaurant managers' views of factors impacting employee tenure and the implementation of a performance-based scheduling system (PBS). A mailed written survey addressed perceived reasons for turnover, rewards restaurants offered to servers, aspects of a PBS, and types of restaurants that could utilize this customer service tool. A total of 512 questionnaires were distributed to full-service restaurants, and 267 surveys were returned, for a response rate of 52%. Most respondents were female and had worked in multiple locations prior to a short tenure at their current restaurant. Most restaurants posted a weekly sales volume under $75,000 and employed fewer than 50 servers. The turnover rate was greater than 75%, and managers spent between one and a quarter and two hours on scheduling weekly. Servers' tenures also were brief. Separation issues included management conflict and lack of performance.
Servers were easily motivated by receiving regular performance evaluations, feeling appreciated, and perceiving a level of flexibility in their employment situation. Management utilized scheduling techniques based primarily on seniority rather than performance record. Managers thought PBS needed to include multiple evaluation criteria, with the exception of total guests served. Tying PBS to customer volume instead of guest service quality was thought to contribute to increased server aggression. Overall, PBS was thought to improve service, improve teamwork, and increase sales. Managers thought that PBS impacted employee motivation, and that it also reduced scheduling time and staff turnover. PBS appeared to be a viable option for solving employee and operational performance issues. Creating an environment where people want to work is important to retaining employees and increasing performance, yielding sales levels that contribute to profit. Management must provide the necessary elements to keep employees motivated and interested. The results of this project indicated that a scheduling system recognizing and rewarding servers who provide exceptional and consistent guest service could help achieve both employer and employee goals.

Item Model for improving the logistics processes for propane delivery (2008-08) Santithammarak, Vanlapha; Smith, Milton L.; Simonton, James L.; Kobza, John E.
Scheduling service times, and serving customers while minimizing time, cost, and distance, are important problems for service providers and challenging research topics. This research concerns the scheduling of propane delivery to a large number of customers in a rural area. Customer demands must be predicted before service times are scheduled. In this research, there are two types of customer demand: regular customer demands and call-in customer demands.
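The call-in demands in this abstract are modeled as Poisson arrivals with some daily mean; simulating such arrivals takes only a few lines (a sketch with a hypothetical mean of 3 calls per day, using Knuth's sampling algorithm; the dissertation estimates the mean from historical data):

```python
import math
import random

# Simulate daily call-in counts as Poisson(lam) draws using Knuth's
# algorithm: multiply uniforms until the product falls below exp(-lam).
# lam = 3.0 calls/day is a hypothetical value for illustration.

def poisson_sample(lam, rng):
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

rng = random.Random(42)
lam = 3.0
daily_callins = [poisson_sample(lam, rng) for _ in range(1000)]
average = sum(daily_callins) / len(daily_callins)  # close to lam
```

A scheduler can then reserve enough slack in each day's route to absorb a typical day's worth of call-ins while still meeting the daily time limit.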
This problem is similar to a situation with regular tasks and emergency tasks that arise and interrupt the regular tasks. The regular customer demands are scheduled, and the regular service is assumed to follow a daily routine. Data on regular customer demands are obtained from the historical demands of regular customers and separated by season (summer and winter). Afterwards, the mean number (λ) of call-in customers per day is calculated, and it is established that the Poisson distribution applies. The call-in services are then scheduled after these customers call for service. The challenge of this research is that each customer's service time must be scheduled within the daily time limit. In addition, the locations of regular customers are investigated before grouping regular customers together for service on the same day. This research focuses on a method to integrate the schedule of regular customers with the call-in customers each day.

Item Modeling, control, and optimization of combined heat and power plants (2014-05) Kim, Jong Suk; Edgar, Thomas F.
Combined heat and power (CHP) is a technology that decreases total fuel consumption and related greenhouse gas emissions by producing both electricity and useful thermal energy from a single energy source. In the industrial and commercial sectors, a typical CHP site relies upon the electricity distribution network for significant periods, i.e., purchasing power from the grid during periods of high demand or when off-peak electricity tariffs are available. On the other hand, in some cases a CHP plant is allowed to sell surplus power to the grid during on-peak hours, when electricity prices are highest, while all operating constraints and local demands are satisfied.
Therefore, if the plant is connected to the external grid and allowed to participate in open energy markets in the future, it could yield significant economic benefits by selling or buying power depending on market conditions. This is achieved by solving the power system generation scheduling problem using mathematical programming. In this work, we present the application of a mixed-integer nonlinear programming (MINLP) approach to the scheduling of a CHP plant in the day-ahead wholesale energy markets. This work employs first-principles models to describe the nonlinear dynamics of a CHP plant and its individual components (gas and steam turbines, heat recovery steam generators, and auxiliary boilers). The MINLP framework includes practical constraints such as minimum/maximum power output and steam flow restrictions, minimum up/down times, start-up and shut-down procedures, and fuel limits. We provide case studies involving the Hal C. Weaver power plant complex at the University of Texas at Austin to demonstrate this methodology. The results show that the optimized operating strategies can yield substantial net income from electricity sales and purchases. This work also highlights the application of a nonlinear model predictive control scheme to a heavy-duty gas turbine power plant for frequency and temperature control. This scheme is compared to a classical PID/logic-based control scheme and is found to provide superior output responses, with smaller settling times and less oscillatory behavior in response to disturbances in electric loads.

Item Optimization of hybrid dynamic/steady-state processes using process integration (2009-06-02) Grooms, Daniel Douglas
Much research in the area of process integration has focused on steady-state processes. However, many process units are inherently unsteady-state or perform best when operated in an unsteady-state manner.
Unsteady-state units are vital to chemical processes but cannot be included in current process optimization methods. Previous methods for optimizing processes containing unsteady-state units place restrictions or constraints on their use, so the resulting design is only the best among the options considered, which likely excludes the true optimal design. To remedy this, a methodology was created to incorporate unsteady-state process units into process optimization analysis. The methodology is kept as general as possible: unlike many existing unsteady-state optimization methods, it determines all three main components of process design (the network configuration, the sizes of units, and the operation schedule), ensuring that the truly optimal process design is found. Three problems were solved to illustrate the solution methodology. First, a general mass exchange network was optimized; the optimization formulation resulted in a mixed-integer nonlinear program, and linearization techniques were used to find the global solution. A property interception network was also optimized, the first work using property integration for systems with unsteady-state behavior. Finally, an industrial semi-batch water purification system was optimized. This problem showed how process integration can be used to optimize a hybrid system and gain insight into the process under many different operating conditions.

Item Optimization of Supply Chain Management and Facility Location Selection for a Biorefinery (2012-02-14) Bowling, Ian Michael

If renewable energy and biofuels are to succeed in the marketplace, each step of their production, and the system as a whole, must be optimized to increase material and energy efficiency, reduce production cost, and create a competitive alternative to fossil fuels.
Systems optimization techniques may be applied to product selection, process design and integration, feedstock procurement, and supply chain management to improve performance. This work addresses two problems facing a biorefinery: technology selection and feedstock scheduling in the face of varying feedstock supply and cost. It also addresses optimization of a biorefinery supply chain, weighing distributed processing of biomass into bio-products via preprocessing hubs against centralized processing, together with facility location selection. Two formulations are proposed that present a systematic approach to each problem, and case studies demonstrate the capabilities of both. Results from the scheduling model show its sensitivity to feedstock price and to transport distance, which is penalized through carbon dioxide emissions. The distributed model shows that hubs may be used to extend the operating radius of a biorefinery and thereby increase profits.

Item Planning and scheduling in semiconductor manufacturing (2010-08) Zarifoglu, Emrah; Kutanoglu, Erhan; Hasenbein, John J.; Morton, David P.; Popova, Elmira; Gilbert, Stephen M.

Semiconductor manufacturing is one of the most complex manufacturing systems in existence, requiring constant improvement to meet demands and expectations. This dissertation studies semiconductor manufacturing under three main topics: preventive maintenance scheduling, lot size management, and AMHS scheduling. We first provide an optimization-based decomposition algorithm and a heuristic algorithm to solve the preventive maintenance scheduling problem, alongside direct optimization. Then, we develop an analytic tool to find the optimal lot sizes to run in a manufacturing environment so as to minimize cycle time.
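The lot-size versus cycle-time trade-off just mentioned can be sketched with a toy M/M/1-style queueing approximation (all numbers and the queueing model are illustrative assumptions, not the dissertation's actual analytic tool):

```python
# Toy lot-sizing model: lots of Q wafers arrive at a single tool. Each lot
# needs a fixed setup plus per-wafer run time. Small lots waste capacity on
# setups (utilization blows up); large lots take long to process.
setup   = 2.0    # hours of setup per lot (assumed)
per_waf = 0.1    # processing hours per wafer (assumed)
demand  = 4.0    # wafers per hour arriving at the tool (assumed)

def cycle_time(Q):
    """Expected time a lot spends at the tool, M/M/1 approximation."""
    service = setup + per_waf * Q          # hours of work per lot
    rho = (demand / Q) * service           # tool utilization
    if rho >= 1:
        return float("inf")                # unstable: setups eat all capacity
    return service / (1 - rho)             # M/M/1 sojourn time

best_Q = min(range(1, 101), key=cycle_time)
print(best_Q, round(cycle_time(best_Q), 2))
```

Sweeping Q shows the characteristic U-shape: lot sizes below 14 are infeasible here (utilization exceeds one), very large lots queue behind long jobs, and an interior lot size minimizes cycle time.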
Finally, we propose an optimization-based automated material handling system (AMHS) scheduling algorithm and compare its performance to a myopic algorithm.

Item Priority Based Switch Allocator in Adaptive Physical Channel Regulator for On Chip Interconnects (2014-08-04) Mahapatra, Sonali

Chip multiprocessors (CMPs), in which a number of relatively simple cores are integrated on a single die, are now a popular design paradigm for microprocessors because of their power, performance, and complexity advantages. The on-chip interconnection network (NoC) is an architectural paradigm that offers a stable, generalized communication platform for large-scale chip multiprocessors. The existing adaptive physical channel regulator (APCR) model provides three regulation schemes at the switch-allocation stage of the NoC router pipeline: monopolizing, fair-sharing, and channel-stealing. Its aim is to allocate physical bandwidth fairly at the granularity of the flit-level transmission unit, breaking the conventional assumption that the flit size equals the phit size. APCR implements the channel-stealing scheme with the existing round-robin scheduler, a well-known algorithm for providing fairness, but this is not an optimal solution. In this thesis, we extend the APCR model and propose three efficient scheduling policies for the channel-stealing scheme in order to provide better quality of service (QoS). Our work can be divided into three parts. In the first part, we implemented a ratio-based scheduling technique that keeps track of the average number of flits sent from each input in every cycle. It not only provides fairness among virtual channels (VCs) but also increases the saturation throughput of the network. In the second part, we implemented an age-based scheduling technique that prioritizes VCs by the age of their requesting flits, where the age of a request is the difference between the current simulation time and its injection time. The age-based scheduler reduces packet latency.
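A minimal sketch of the round-robin and age-based arbitration just described (the request representation is a hypothetical simplification, not the thesis's cycle-accurate simulator):

```python
def round_robin(requests, last):
    """Grant the first requesting VC after the previously granted one."""
    n = len(requests)
    for i in range(1, n + 1):
        vc = (last + i) % n
        if requests[vc] is not None:
            return vc
    return None          # no VC is requesting this cycle

def age_based(requests, now):
    """Grant the VC whose head flit has waited longest since injection.
    Ties break toward the higher-numbered VC."""
    waiting = [(now - t, vc) for vc, t in enumerate(requests) if t is not None]
    return max(waiting)[1] if waiting else None

# requests[vc] = injection time of the head flit on that VC, or None if idle
requests = [5, None, 2, 9]
print(round_robin(requests, last=2))  # grants VC 3, the next requester
print(age_based(requests, now=10))    # grants VC 2, whose flit is oldest
```

The contrast is visible even in this toy: round-robin cycles position-by-position regardless of waiting time, while the age-based rule always serves the oldest flit, which is why it tends to reduce worst-case packet latency.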
In the last part, we implemented a static-priority-based scheduler, in which packets are assigned random priorities at the time of their injection into the network. High-priority packets can be forwarded to any of the VCs, whereas low-priority packets can be forwarded to only a limited number of VCs; the static-priority scheduler thus limits access to the VCs according to packet priority. We study performance metrics such as average packet latency and saturation throughput for all three new scheduling techniques. We present simulation results under three synthetic traffic patterns (bit complement, transpose, and uniform random) at injection rates ranging from very low (no load) to high load. We evaluate the performance improvement of our scheduling techniques in APCR against the basic NoC design, and also against the monopolizing, fair-sharing, and round-robin channel-stealing schemes of APCR. Results from our detailed cycle-accurate simulator show that the new scheduling policies in the APCR model improve network throughput by 10% on synthetic workloads compared with the existing round-robin scheme, and that APCR with our scheduling policies outperforms the baseline router by 28X under synthetic workloads.

Item Reliable Downlink Scheduling for Wireless Networks with Real-Time and Non-Real Time Clients (2014-08-05) Jain, Abhishek

In this thesis, we studied the problem of designing a downlink scheduling policy to serve multiple types of clients from a base station in a time-varying wireless network. An ideal scheduling policy is fair among clients, provides reliability, achieves high system throughput, and prevents strategic clients from gaming the system by misreporting their requirements.
Existing scheduling policies fail to achieve one or more of these features. The Proportional Fair policy, for example, fails to provide reliability to real-time clients, while the Round Robin policy provides reliability but fails to achieve high system throughput in a time-varying wireless network. Other scheduling policies prioritize clients based on their delay requirements; under these, a lower-priority client may misreport its flow type to obtain better performance. For instance, a non-real-time client may pretend to be a real-time client if doing so improves its average throughput. We proposed a new scheduling policy that is not only proportionally fair but also provides reliability to a mixture of real-time and non-real-time clients sharing a wireless channel. The proposed policy serves clients with different service requirements and gives the best service to clients that report those requirements truthfully; a client claiming false requirements is penalized with reduced performance. We demonstrate the effectiveness of the algorithm theoretically, assuming uniformly distributed service rates for all clients, and then provide extensive simulation results under a fast-fading Rayleigh model to show that the policy can be readily applied in wireless networks.
We also show that our policy outperforms existing policies in the reliability it provides to clients, and that, unlike other common policies, it degrades the performance of a client that misreports its requirements.

Item Scheduling and resource allocation for mobile broadband networks (2014-12) Ishiguro, Arthur Go; Andrews, Jeffrey G.; De Veciana, Gustavo

Unlike traditional cellular networks, where voice calls dominate network traffic, modern mobile traffic is created by a mixture of voice and broadband data services. This heterogeneous mixture includes voice calls, web browsing, file transfers, video streaming, and social media applications. Consequently, network planning and radio resource management strategies must be aware of the quality of experience perceived by users of these various applications. In this report, we explore traffic characteristics, scheduling and resource allocation strategies, and user experience models in mobile broadband networks.
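As a minimal sketch of the proportional-fair rule discussed in the downlink-scheduling abstracts above (this is the textbook PF metric with an assumed smoothing factor, not any of the modified policies proposed in those works):

```python
def pf_schedule(rates, avg, beta=0.1):
    """Serve the client with the largest ratio of instantaneous rate to
    exponentially averaged throughput, then update every client's average."""
    chosen = max(range(len(rates)), key=lambda i: rates[i] / avg[i])
    for i in range(len(avg)):
        served = rates[i] if i == chosen else 0.0
        avg[i] = (1 - beta) * avg[i] + beta * served
    return chosen

# Two clients: client 0 usually has the better channel, but in the third
# slot its rate drops, so PF serves client 1 for fairness.
avg = [1.0, 1.0]
picks = [pf_schedule(r, avg) for r in [(4, 2), (4, 2), (1, 2), (4, 2)]]
print(picks)
```

This illustrates the trade-off noted in the downlink thesis: PF exploits good channel states for throughput and starves no one indefinitely, but it offers no explicit reliability guarantee to real-time clients.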