Browsing by Subject "Optimization"
Now showing 1 - 20 of 115
Item: A New Design Method Framework for Open Origami Design Problems (2014-08-12)
Li, Wei
With the development of computer science and manufacturing techniques, modern origami is no longer used only for making artistic shapes, as its traditional counterpart was many centuries ago. Instead, the outstanding light weight and high flexibility of origami structures have expanded their engineering applications in aerospace, medical devices, and architecture. To support the automatic design of more complex modern origami structures, several computational origami design methods have been established. However, these methods still focus on the problem of determining a crease pattern that folds into an exact, pre-determined shape, and they apply deductive logic that works for only one type of topological origami structure. To drop the topological constraints on the shapes, this dissertation develops and implements abductive evolutionary design methods for open origami design problems, which ask for designs that achieve geometric and functional requirements rather than an exact shape. This type of open origami design problem has no formal computational solution yet. Since the open origami design problem requires searching for solutions among arbitrary candidates without fixing a particular topological formation, it is NP-complete in computational complexity. This research therefore selects the genetic algorithm (GA) and one of its variations, computational evolutionary embryogeny (CEE), to solve origami problems. The dissertation makes two major contributions. One is a GA-based abstract design method framework for open origami design problems. The other is the geometric representation of origami designs, which directs the definition and mapping of their genetic and physical representations; two novel geometric representations are introduced, "ice-cracking" and the pixelated multicellular representation (PMR). The proposed design methods and adapted evolutionary operators were tested on two open origami design problems: making flat-foldable shapes with a desired profile area, and making rigid-foldable 3D water containers with a desired volume. The results show the proposed methods to be widely applicable and highly effective in solving open origami design problems.

Item: A New Method to Assess Best Management Practice Efficiency to Optimize Storm Water Management (2014-12-16)
Tu, Min-cheng
For total suspended solids (TSS), total nitrogen (TN), and total phosphorus (TP), this study examined the relationship between best management practice (BMP) pollutant removal efficiency and environmental factors such as the ratio of BMP to catchment area, the dominant land use, the ratio of dominant land use to catchment area, slope, and BMP type, and derived optimal installation plans based on different criteria. A SWMM model was built for the Shoal Creek Watershed in Austin, Texas, and inverse modeling (i.e., fitting the model to observation data) was used to calibrate BMP removal efficiency. The relationship could then be derived by multiple linear regression, with BMP removal efficiency as the response variable and the environmental factors as predictor variables. Before inverse modeling could be applied, however, the SWMM pollutant buildup and washoff parameters had to be derived. A few types of land use were identified as the main sources of pollutants.
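As a rough illustration of the regression step described above, the sketch below fits BMP removal efficiency to a few environmental factors by ordinary least squares; the factor columns and all numbers are hypothetical placeholders, not data from the study.

```python
import numpy as np

# Hypothetical predictors for five BMP sites (not study data):
# columns = BMP/catchment area ratio, dominant land use/catchment ratio, slope
X = np.array([
    [0.02, 0.55, 0.010],
    [0.05, 0.40, 0.025],
    [0.01, 0.70, 0.008],
    [0.08, 0.30, 0.040],
    [0.03, 0.60, 0.015],
])
# Calibrated TSS removal efficiencies from inverse modeling (placeholders)
y = np.array([0.42, 0.61, 0.35, 0.70, 0.50])

# Add an intercept column and fit by ordinary least squares
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print("intercept and factor coefficients:", coef)

# Predict removal efficiency for a new candidate site
x_new = np.array([1.0, 0.04, 0.50, 0.020])
print("predicted TSS removal efficiency:", float(x_new @ coef))
```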
The numerical distribution of the parameters suggested that the buildup and washoff parameters are controlled by forces at different spatial scales. To simplify calibration, the SWMM model simulated only direct runoff, so the mean pollutant concentration in base flow was needed to convert observed concentrations to concentrations in direct runoff. The Shoal Creek Watershed discharges into Lady Bird Lake, and changes in the lake's water quality on base-flow-dominated dates were used to estimate the base flow concentration from the watershed; the lake's water quality was determined from Landsat imagery. The equations predicting BMP removal efficiency from environmental factors were analyzed to identify the most and least efficient BMP types, and the land uses for which BMPs achieve the highest and lowest removal efficiencies for TSS, TN, and TP. Two planning criteria were used for the optimal BMP plans, over several time frames: one criterion is a goal concentration in runoff, and the other combines a goal concentration with a budget constraint. For each criterion, the associated optimal plan gives an areal ratio between BMP types across the different time frames. It was also found that the Shoal Creek Watershed needs more BMPs. Suggestions for the Environmental Criteria Manual of Austin were also made based on this study.

Item: A Process Integration Approach to the Strategic Design and Scheduling of Biorefineries (2011-02-22)
Elms, Rene Davina
This work focused on the design and operation of biodiesel production facilities, in support of the broader goal of developing a strategic approach to the development of biorefineries; biodiesel production provided an appropriate starting point for these efforts. The work was divided into two stages. Various feedstocks may be used to produce biodiesel, including virgin vegetable oils and waste cooking oil, and with changing feedstock prices, supply, and demand, there is a need to consider multiple feedstock options. The objective of the first stage was to develop a systematic procedure for the scheduling and operation of flexible biodiesel plants that accommodate a variety of feedstocks. This work employed a holistic approach combining process simulation, synthesis, and integration techniques to provide: process simulation of a biodiesel plant for various feedstocks, integration of energy and mass resources, optimization of process design and scheduling, and techno-economic assessment and sensitivity analysis of the proposed schemes. An optimization formulation was developed to determine scheduling and operation for various feedstocks, and a case study was solved to illustrate the merits of the devised procedure. With increasing attention to the environmental impact of discharging greenhouse gases (GHGs), there has been growing public pressure to reduce the carbon footprint associated with fossil fuel use; one key strategy is the substitution of fossil fuels with biofuels such as biodiesel. The design of biodiesel plants has traditionally been based on technical and economic criteria, but GHG policies have the potential to significantly alter the design of these facilities, the selection of feedstocks, and the scheduling of multiple feedstocks. The objective of the second stage was therefore to develop a systematic approach to the design and scheduling of biodiesel production processes that accounts for the effect of GHG policies. An optimization formulation was developed to maximize the profit of the process subject to flowsheet synthesis and performance modeling equations. The carbon footprint is accounted for through a life cycle analysis (LCA), and the objective function includes a term reflecting the LCA impact of a feedstock and its processing to biodiesel. A multiperiod approach was used, and a case study was solved for several scenarios of feedstocks and GHG policies.
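A toy multiperiod feedstock-selection LP in the spirit of the second-stage formulation, with a carbon-cost term in the objective; the prices, yields, capacities, and emission factors below are invented for illustration, not values from the dissertation.

```python
import numpy as np
from scipy.optimize import linprog

# Two feedstocks (virgin oil, waste cooking oil) over two periods
yield_bd = np.array([0.95, 0.90])      # t biodiesel per t feedstock
feed_cost = np.array([900.0, 450.0])   # $ per t feedstock
ghg_factor = np.array([2.0, 0.8])      # t CO2e per t feedstock (LCA-style)
bd_price, carbon_price = 1200.0, 50.0  # $ per t biodiesel, $ per t CO2e

# Per-tonne margin including the GHG-policy cost term
margin = bd_price * yield_bd - feed_cost - carbon_price * ghg_factor

# Variables x ordered as [f0p0, f1p0, f0p1, f1p1]; linprog minimizes -profit
c = -np.tile(margin, 2)
A_ub = np.kron(np.eye(2), np.ones(2))  # 100 t feedstock capacity per period
b_ub = np.full(2, 100.0)
bounds = [(0, None), (0, 60.0)] * 2    # waste oil limited to 60 t per period

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
print("schedule (t):", res.x.round(1), " profit ($):", round(-res.fun, 2))
```

Raising `carbon_price` shifts the optimal schedule toward the lower-emission waste oil, which is the qualitative effect a GHG-policy term is meant to capture.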
Item: A Research on Production Optimization of Coupled Surface and Subsurface Model (2013-07-09)
Iemcholvilert, Sevaphol
One of the main objectives in the oil and gas industry is to constantly improve reservoir management capabilities through production optimization strategies that can positively impact the net present value (NPV) of a given project. To achieve this goal, the industry faces the difficult task of maximizing hydrocarbon production and minimizing unwanted fluids, such as water, while sustaining or even enhancing the reservoir recovery factor by properly handling the fluids at surface facilities. A key element in this process is understanding the interactions between surface and subsurface dynamics in order to provide insightful production strategies that honor reservoir management and surface facility constraints. The ideal implementation, full surface/subsurface coupling, has been hindered by the computational effort required, so various types of partial coupling that require less computational effort are implemented in practice. Given the importance of coupled surface and subsurface models for production optimization, and taking advantage of advances in computational performance, this research explores the concepts of surface/subsurface model coupling and production optimization. The research aims at demonstrating the role of coupling in production optimization under simple production constraints (i.e., production and injection pressure limits). Production prediction runs with various reservoir descriptions (homogeneous low permeability, homogeneous high permeability, and heterogeneous permeability) and different fluid properties (dead-oil and live-oil PVT) were performed to understand the effect of coupling level and coupling scheme on predicted production and injection rates. The results show that with dead-oil PVT, the production rate is less sensitive to the coupling scheme, in both homogeneous and heterogeneous reservoirs, than in the live-oil PVT cases; with live-oil PVT, the production rate is more sensitive to the coupling scheme for homogeneous high permeability and heterogeneous permeability than for homogeneous low permeability. Production optimization of water flooding under production and injection constraints is also considered.

Item: A reverse osmosis treatment process for produced water: optimization, process control, and renewable energy application (2009-06-02)
Mareth, Brett
Fresh water resources in many of the world's oil-producing regions, such as western Texas, are scarce, while produced water from oil wells is plentiful, though unfit for most applications due to high salinity and other contamination. Disposing of this water is a great expense to oil producers.
This research seeks to advance a technology developed to treat produced water by reverse osmosis and other means, rendering it suitable for agricultural or industrial use while simultaneously reducing disposal costs. Pilot testing of the process has demonstrated the technology's capability to produce good-quality water, but process optimization and control had yet to be fully addressed and are the focus of this work. The use of renewable resources (wind and solar) is also analyzed as a potential power source for the process, and an overview of reverse osmosis membrane fouling is presented. A computer model of the process was created in a dynamic simulator, Aspen Dynamics, to determine the energy consumption of various process design alternatives and to test control strategies. By preserving the mechanical energy of the concentrate stream of the reverse osmosis membrane, process energy requirements can be reduced several-fold from those of the current configuration. Process control schemes using basic feedback control with proportional-integral (PI) controllers are proposed, and the feasibility of the strategy for the most complex process design is verified by successful dynamic simulation. A macro-driven spreadsheet was created to allow quick cost comparisons of renewable energy sources in a variety of locations; using this tool, wind and solar costs were compared for cities in regions throughout Texas. The renewable resource showing the greatest potential was wind power: in windy regions such as the Texas Panhandle, wind-generated power costs are approximately equal to those of diesel-generated power.
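A generic discrete-time PI feedback loop of the kind mentioned above, run against a made-up first-order process; the gains, time constant, and setpoint are illustrative only, not the Aspen Dynamics model.

```python
# Toy PI loop on a first-order process (all parameters are illustrative)
dt, Kp, Ki, tau = 0.1, 2.0, 1.0, 0.5
setpoint = 1.0           # e.g., a target permeate flow, arbitrary units
y, integral = 0.0, 0.0   # process output and integrator state

for step in range(100):
    error = setpoint - y
    integral += error * dt
    u = Kp * error + Ki * integral   # PI control action
    y += dt * (-y + u) / tau         # Euler step of dy/dt = (-y + u) / tau

print(f"output after {100 * dt:.0f} s: {y:.3f} (setpoint {setpoint})")
```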
Item: A Systems-Integration Approach to Optimizing the Water-Energy Nexus in Energy Surplus Processes (2014-10-02)
Gabriel, Kerron Jude
The objective of this research was to develop novel tools for systematically optimizing the benefits of the water-energy nexus in processes with surplus energy. The developed approach addresses four problems: (1) screening of processes to identify the potential for cogeneration of water and power, (2) development of a flexible water-generating process, (3) synthesis of the integrated water and power facility, and (4) thermoeconomic analysis of the integrated process. In the screening problem, a targeting and benchmarking approach was used to identify the limits of the process for producing water and power from surplus energy, and various process designs were explored to compare the effects of process changes on the overall targets for water and power generation. For the water-generating process, a new mathematical formulation was proposed for the thermal desalination of saline water: a mass flowrate decoupling approach that reduces the overall mass and energy balances to a linear programming (LP) problem. This approach was used to develop novel, flexible multi-effect distillation with thermal vapor compression (MED-TVC) processes that balance the tradeoff between economics and thermal efficiency. In the synthesis problem, an integrated water and power generating facility was developed based on the excess heat sources of the process. The synthesis approach incorporated four building blocks: (a) total site analysis to identify appropriate steam-level connections in the process, (b) heat exchange network synthesis for producing steam and boiler feed water utilities from excess process heat, (c) turbine network development for power generation, and (d) water generation and integration via direct recycle. In the thermoeconomic analysis, the integrated facility from the synthesis step was evaluated and optimized to maximize the intrinsic balance of the water-energy nexus, drawing on extensive literature sources, fundamental chemical engineering practice, and mathematical programming techniques. The gas-to-liquids process was used as the case study to demonstrate the developed methodologies, owing to its ability to produce not only fuels and synthetic lubricants but also potable water and power as part of its commodity portfolio.

Item: Accounting for reservoir uncertainties in the design and optimization of chemical flooding processes (2012-08)
Rodrigues, Neil; Delshad, Mojdeh; Pope, Gary A.
Chemical enhanced oil recovery methods have been growing in popularity as a result of the depletion of conventional oil reservoirs and high oil prices. These processes are significantly more complex than waterflooding and require detailed engineering design before field-scale implementation. Coreflood experiments performed on reservoir rock are invaluable for obtaining parameters for field-scale flooding simulations; however, the design used in these floods may not always scale to the field because of heterogeneities, chemical retention, and mixing and dispersion effects. Reservoir simulators can be used to identify an optimum design that accounts for these effects, but uncertainties in reservoir properties can still cause poor project results if not properly accounted for. Different reservoirs are investigated in this study, including unconventional applications of chemical flooding such as a 3 md high-temperature carbonate reservoir and a heterogeneous sandstone reservoir with very high initial oil saturation. The goal of the research is to investigate the impact that selected reservoir uncertainties can have on the success of a pilot and to propose methods to reduce the sensitivity to these parameters. This research highlights the importance of good mobility control in all the case studies, which is shown to have a significant impact on project economics, and it demonstrates that a slug design with good mobility control is less sensitive to uncertainties in the relative permeability parameters. The research also shows that for a low-permeability reservoir, surfactant propagation can have a significant impact on the economics of a surfactant-polymer flood: in addition to mobilizing residual oil and increasing oil recovery, the surfactant enhances the relative permeability, which significantly increases injectivity and reduces project life. Injecting a high concentration of surfactant also makes the design less sensitive to uncertainties in adsorption.
Finally, it was demonstrated that for a heterogeneous reservoir with high initial oil saturation, optimizing the salinity gradient significantly increases oil recovery and makes the process less sensitive to uncertainties in the cation exchange capacity.

Item: Accounting for the effects of rehabilitation actions on the reliability of flexible pavements: performance modeling and optimization (2009-05-15)
Deshpande, Vighnesh Prakash
A performance model and a reliability-based optimization model for flexible pavements are developed that account for the effects of rehabilitation actions. The performance model can be implemented in any application that requires the reliability (performance) of pavements before and after rehabilitation. Response surface methodology, in conjunction with Monte Carlo simulation, is used to evaluate pavement fragilities, and, for added flexibility, a parametric regression model is developed that expresses the fragilities in terms of decision variables. The fragilities serve as performance measures in a reliability-based optimization model. Three decision policies for rehabilitation actions are formulated and evaluated using a genetic algorithm, and a multi-objective genetic algorithm is used to obtain the optimal trade-off between performance and cost. A numerical study illustrates the developed models. The performance model describes well the behavior of flexible pavement both before and after rehabilitation actions. The sensitivity measures suggest that the reliability of flexible pavements before and after rehabilitation can be improved most effectively by providing as thick an asphalt layer as possible in the initial design and by improving the subgrade stiffness, while the importance measures suggest that the asphalt layer modulus at the time of rehabilitation represents the principal uncertainty for post-rehabilitation performance. Statistical validation shows that response surface methodology can efficiently describe pavement responses, and the parametric regression results indicate that the regression models can express the fragilities in terms of the decision variables. The numerical illustration shows that the cost minimization and reliability maximization formulations can be used to determine optimal rehabilitation policies, and that the Pareto optimal solutions from the multi-objective genetic algorithm give the trade-off between cost and performance, avoiding possible conflict between the two decision policies.
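A toy version of the fragility evaluation described above: sample uncertain inputs, push them through a fitted response surface, and count limit-state exceedances. The quadratic surface, input distributions, and limit are invented, not the study's models.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical quadratic response surface for a pavement response (say, rut
# depth in mm) as a function of asphalt thickness h (m) and subgrade modulus
# E (MPa); coefficients are illustrative only.
def response_surface(h, E):
    return 25.0 - 40.0 * h - 0.05 * E + 12.0 * h**2

n = 100_000
h = rng.normal(0.20, 0.02, n)   # assumed input distributions
E = rng.normal(80.0, 15.0, n)
limit = 15.0                    # assumed allowable response
fragility = np.mean(response_surface(h, E) > limit)
print(f"estimated probability of exceedance: {fragility:.4f}")
```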
Item: Algorithms for an Unmanned Vehicle Path Planning Problem (2013-06-25)
Qin, Jianglei
Unmanned vehicles (UVs) have seen significant military and civil use over the last decade, and path planning plays an important role in using available resources, such as the UVs and their sensors, as efficiently as possible. The main purpose of this thesis is to address two path planning problems involving a single UV: the quota problem and the budget problem. In the quota problem, the vehicle has to visit enough targets to satisfy a quota requirement on the total prize collected in the tour; in the budget problem, the vehicle has to comply with a constraint on the distance it travels. Both problems are solved using a practical heuristic called the prize-multiplier approach: a primal-dual algorithm first assigns targets to the UV, and the Lin-Kernighan heuristic (LKH) is then applied to generate a tour of the assigned targets. The approach was tested on two vehicle models: a simple vehicle that can move in any direction without a constraint on its turning radius, and a Reeds-Shepp vehicle. Both problems were also modeled in C++ using multi-commodity flow formulations and solved to optimality with the Concert Technology of CPLEX, and the CPLEX results were used to measure the quality of the heuristic solutions. Comparing the objective values and running times of the heuristics and CPLEX shows that the proposed heuristics produce good-quality solutions within the desired time limits.
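The sketch below is not the thesis's prize-multiplier heuristic; it is a simple greedy baseline for the quota problem as stated: visit the nearest unvisited target until the collected prize meets the quota, then return to the depot. The coordinates and prizes are made up.

```python
import math

targets = [(2, 1, 5), (5, 4, 8), (1, 6, 3), (7, 2, 6), (4, 7, 4)]  # (x, y, prize)
quota = 15

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

pos, tour, prize, length = (0.0, 0.0), [], 0, 0.0
remaining = list(targets)
while prize < quota and remaining:
    nxt = min(remaining, key=lambda t: dist(pos, t))   # nearest target
    length += dist(pos, nxt)
    prize += nxt[2]
    tour.append(nxt)
    pos = (nxt[0], nxt[1])
    remaining.remove(nxt)
length += dist(pos, (0.0, 0.0))   # close the tour at the depot

print("tour:", tour, "prize:", prize, "length:", round(length, 2))
```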
Item: Algorithms for VLSI Circuit Optimization and GPU-Based Parallelization (2010-07-14)
Liu, Yifang
This research addresses several critical challenges in VLSI design automation: sophisticated solution search on DAG topologies, simultaneous multi-stage design optimization, optimization of multi-scenario and multi-core designs, and GPU-based parallel computing for runtime acceleration. Discrete optimization for VLSI design automation is often quite complex because of the inconsistency and interference between solutions on reconvergent paths in a directed acyclic graph (DAG). This research proposes a systematic solution search guided by a global view of the solution space. The key idea is joint relaxation and restriction (JRR), which is similar in spirit to mathematical relaxation techniques such as Lagrangian relaxation: relaxation and restriction together provide a global view and iteratively improve the solution. Traditionally, circuit optimization is carried out as a sequence of separate optimization stages, and the problem with sequential optimization is that the best solution for one stage may be worse for another. To overcome this difficulty, multiple optimization techniques are performed simultaneously; searching the combined solution space gives a broader view of the problem and a better overall result. This approach is applied to two problems: simultaneous technology mapping and cell placement, and simultaneous gate sizing and threshold voltage assignment. Modern processors have multiple working modes that trade off power consumption and performance, or that maintain a certain performance level in a power-efficient way, so a circuit design needs to accommodate different scenarios, such as different supply voltage settings. This multi-scenario optimization problem is handled with Lagrangian relaxation: multiple scenarios are balanced simultaneously through Lagrangian multipliers, as are multiple objectives and constraints. A new method is proposed to calculate the subgradients of the Lagrangian function and solve the Lagrangian dual problem more effectively. Multi-core architecture also poses new problems for design automation. For example, multiple cores on the same chip may have an identical design in some parts while differing in the rest; in the case of buffer insertion, the identical part has to be carefully optimized for all the cores under different environmental parameters, a problem of much higher complexity than buffer insertion on single cores. This research proposes an algorithm, based on critical component analysis, that optimizes the buffering solution for multiple cores simultaneously. Finally, under intensifying time-to-market pressure, circuit optimization must not only find high-quality solutions but also produce results fast. Recent advances in general-purpose graphics processing unit (GPGPU) technology provide massive parallel computing power, and this research turns the complex computation of circuit optimization into many subtasks processed by parallel threads. The proposed task partitioning and scheduling methods exploit the GPU's computing power to achieve significant speedup without sacrificing solution quality.

Item: Analysis of classical root-finding methods applied to digital maximum power point tracking for photovoltaic energy generation (2011-08)
Chun, Seunghyun; Kwasinski, Alexis; Grady, William; Driga, Mircea; Hallock, Gary; Byoun, Jaesoo
This dissertation examines the application of classical root-finding methods to digital maximum power point tracking (DMPPT). An overview of root-finding methods applied to photovoltaic (PV) systems is presented, covering the Newton-Raphson method (NRM), the secant method (SM), the bisection method (BSM), the regula falsi method (RFM), and a proposed modified regula falsi method (MRFM). These methods are compared among themselves, and some of their features are also compared with other commonly used maximum power point (MPP) tracking methods. Issues that arise when implementing these continuous-variable root-finding methods in the digital domain are explored, including numerical stability, digital implementation of differential operators, quantization error, and convergence speed. The analysis provides practical insights into the design of a DMPPT based on classical root-finding algorithms. A new DMPPT based on the MRFM is proposed and used as the basis for the discussion; it is shown to be faster than the other discussed methods that ensure convergence to the MPP. The discussion takes a practical perspective, supported by theoretical analysis, and extensive simulation and experimental results with hardware prototypes verify the analysis.
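At the maximum power point, dP/dV = 0, so a root-finding DMPPT searches for a zero of the power derivative. Below is a minimal Newton-Raphson iteration on a toy PV curve; the diode-style model and its parameters are invented for illustration, not taken from the dissertation.

```python
import numpy as np

Isc, I0, Vt = 8.0, 1e-6, 1.2   # toy PV parameters

def power(V):
    return V * (Isc - I0 * (np.exp(V / Vt) - 1.0))

def dP(V, h=1e-4):   # numerical derivatives, as a digital implementation might use
    return (power(V + h) - power(V - h)) / (2 * h)

def d2P(V, h=1e-4):
    return (dP(V + h) - dP(V - h)) / (2 * h)

V = 15.0                       # initial operating-voltage guess
for _ in range(20):            # Newton-Raphson on f(V) = dP/dV
    step = dP(V) / d2P(V)
    V -= step
    if abs(step) < 1e-8:
        break

print(f"MPP voltage ~ {V:.3f} V, power ~ {power(V):.2f} W")
```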
Item: Analysis of the power grid: structure and secure operations (2015-08)
Deka, Deepjyoti; Vishwanath, Sriram; Baldick, Ross; Kwasinski, Alexis; Meyers, Lauren A.; Moorty, Sainath
Power grids form one of the vital backbone networks of our society, providing electricity for daily socio-economic activities. Given this importance, there is a need to understand the structure and control of the power grid for fair power market computations and efficient delivery of electricity. This work studies two problems associated with different aspects of today's power grid, combining techniques from network science, control theory, and optimization. The first problem concerns the common structural features observed in power grids across the world and the development of a tractable modeling framework that incorporates them. Such a framework can yield insights into the structural vulnerability of the grid and help design realistic test cases for studying the effects of structural and operational reinforcements as the grid evolves over time. We develop a generative model, based on spatial point process theory, that provably produces the distinct exponential degree distribution observed in several power grids, and we use critical graph parameters (diameter, eigen-spread, betweenness centralities, and clustering coefficients) to compare how well the framework models the power grids of the western USA and of ERCOT in Texas. The second problem is a detailed study of malicious data attacks on state estimation in the power grid; such attacks pose a serious threat to efforts to implement distributed control for efficient grid operations. We develop a graph-theoretic framework to analyze the design of optimal data attacks and study cost-optimal techniques for building resilience against them. The study considers a practical adversary capable of modifying meter readings as well as jamming the flow of information from meters to the grid controller. We prove that the design of optimal 'hidden' and 'detectable' attacks can be formulated as constrained graph-cut problems that depend on the relative costs of the adversarial techniques, and we present algorithms for attack construction. Further, we design a new 'topology' attack regime in which an adversary changes the breaker statuses of grid lines to affect state estimation in systems where all meter measurements are encrypted and hence secure from manipulation. We discuss bounds on the security requirements imposed by the developed attack models and design algorithms for determining the optimal protection strategy. This gives an accurate characterization of grid vulnerability to general data attacks and eavesdroppers, and motivates efforts to expand the deployment of new secure meters to foil cyber attacks on the grid.

Item: Assembly and test operations with multipass requirement in semiconductor manufacturing (2014-05)
Gao, Zhufeng; Bard, Jonathan F.
In semiconductor manufacturing, wafers are grouped into lots and sent to a separate facility for assembly and test (AT) before being shipped to the customer. Up to a dozen operations are required during AT, and the facility in which they are performed is a reentrant flow shop consisting of several dozen to several hundred machines and up to a thousand specialized tools. Each lot follows a specific route through the facility, perhaps returning to the same machine multiple times; each step in the route is referred to as a "pass," and lots in work in process (WIP) with more than a single step remaining in their route are referred to as multi-pass lots. The multi-pass scheduling problem is to determine machine setups, lot assignments, and lot sequences that achieve optimal output, as measured by four objectives related to key device shortages, throughput, machine utilization, and makespan, prioritized in that order. The two primary goals of this research are to develop a new formulation of the multipass problem and to design a variety of solution algorithms that can be used for both planning and real-time control.
To begin, the basic AT model considering only single-pass scheduling is introduced, along with the previously developed greedy randomized adaptive search procedure (GRASP) and its extensions. Two alternative schemes are then proposed to solve the multipass scheduling problem. In the final phase of this research, an efficient procedure is presented for prioritizing machine changeovers in an AT facility on a periodic basis, providing real-time support. In daily planning, target machine-tooling combinations are derived based on work in process, due dates, and backlogs; as machines finish their current lots, they are reconfigured to match their targets. The proposed algorithm is designed to run in real time.

Item: An assessment of the system costs and operational benefits of vehicle-to-grid schemes (2013-12)
Harris, Chioke Bem; Webber, Michael E., 1971-
With the emerging nationwide availability of plug-in electric vehicles (PEVs) at prices attainable for many consumers, electric utilities, system operators, and researchers have been investigating the impact of this new source of electricity demand. PEVs on the electric grid might offer benefits equivalent to dedicated utility-scale energy storage systems by leveraging vehicles' grid-connected energy storage through vehicle-to-grid (V2G) enabled infrastructure. Existing research, however, has not effectively examined the interactions between PEVs and the electric grid in a V2G system. To address these shortcomings, longitudinal vehicle travel data are first used to identify patterns in vehicle use. This analysis showed that vehicle use patterns differ distinctly between weekends and weekdays, that seasonal interactions between vehicle charging, electric load, and wind generation might be important, and that vehicle charging might increase the already high peak summer electric load in Texas. Subsequent simulations of PEV charging revealed that unscheduled charging would increase summer peak load in Texas by approximately 1%, and that the uncertainty arising from unscheduled charging would require only limited increases in frequency regulation procurements. To assess the market potential of a V2G system that provides frequency regulation ancillary services, and that might provide financial incentives to participating PEV owners, a two-stage stochastic programming formulation of a V2G system operator was created. The model was also designed to determine the effect of the V2G operator's market power on frequency regulation prices, the effect of uncertainty in real-time vehicle availability and state of charge on the aggregator's ability to provide regulation services, and the effect of different vehicle characteristics on revenues. Results showed that the V2G system operator could generate revenue from participation in the frequency regulation market in Texas, even under the uncertainty of real-time vehicle use. The model also showed that the V2G system operator would have a significant impact on prices: as the number of PEVs participating in a V2G program in a given region increased, per-vehicle revenues, and thus the compensation provided to vehicle owners, would decline dramatically. From these estimated payments to PEV owners, the decision to participate in a V2G program was analyzed.
The balance between the estimated payments to PEV owners for participating in a V2G program and the increased probability of being left with a depleted battery as a result of V2G operations indicates that an owner of a range-limited battery electric vehicle (BEV) would probably not be a viable candidate for a V2G program, while a plug-in hybrid electric vehicle (PHEV) owner might find one worthwhile. Even for a PHEV owner, however, the compensation for participating in a V2G program provides limited incentive to join.

Item: Automated and Optimized Project Scheduling Using BIM (2014-04-04)
Faghihi, Vahid
Construction project scheduling is one of the most important tools available to project managers in the architecture, engineering, and construction (AEC) industry. Construction schedules allow project managers to track and manage the time, cost, and quality of projects (the project management triangle). Developing project schedules is almost always troublesome, since it is heavily dependent on project planners' knowledge of work packages, on-the-job experience, planning capability, and oversight. A thorough understanding of the project geometry and its internal stability relations plays a significant role in generating practical construction sequencing. Meanwhile, the concept of embedding all project information into a three-dimensional (3D) representation of a project (a Building Information Model, or BIM) has recently drawn the attention of the construction industry. In this dissertation, the author demonstrates how to develop and extend the genetic algorithm (GA) not only to generate construction schedules but also to optimize the outcome for different objectives (cost, time, and job-site movement). The basis for the GA calculations is the embedded data available in the project's BIM, which is provided as input to the algorithm. By reading the geometry information in the 3D model and receiving more specific information about the project and its resources from the user, the algorithm generates different construction schedules. The resulting Pareto frontier graphs, 4D animations, and schedule wellness scores help the user find the most suitable construction schedule for the given project.
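A bare-bones GA loop of the kind the abstract describes, evolving task orders against a toy cost; the permutation encoding and cost function below are stand-ins, not the dissertation's BIM-derived objectives or stability constraints.

```python
import random

random.seed(1)
durations = [3, 1, 4, 1, 5, 9, 2, 6]   # toy task durations

def cost(order):
    # weighted completion time of tasks in the given sequence (stand-in)
    t = total = 0
    for task in order:
        t += durations[task]
        total += t
    return total

def crossover(a, b):
    # order crossover: keep a slice of parent a, fill the rest in b's order
    i, j = sorted(random.sample(range(len(a)), 2))
    middle = a[i:j]
    rest = [g for g in b if g not in middle]
    return rest[:i] + middle + rest[i:]

def mutate(order):
    i, j = random.sample(range(len(order)), 2)
    order[i], order[j] = order[j], order[i]

pop = [random.sample(range(8), 8) for _ in range(30)]
for gen in range(100):
    pop.sort(key=cost)
    survivors = pop[:10]               # truncation selection with elitism
    children = []
    while len(children) < 20:
        child = crossover(*random.sample(survivors, 2))
        if random.random() < 0.2:
            mutate(child)
        children.append(child)
    pop = survivors + children

best = min(pop, key=cost)
print("best order:", best, "cost:", cost(best))
```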
Item: Automated estimation of time and cost for determining optimal machining plans (2012-05)
Van Blarigan, Benjamin; Campbell, Matthew I.; Li, Wei
The process of taking a solid model and producing a machined part requires the time and skill of a range of professionals and several hours of part review, process planning, and production; much of this time is spent creating a methodical, step-by-step process plan for making the part from stock. The work presented here is part of a software package that performs automated process planning for a solid model. This software can not only greatly decrease the planning time for part production but also give the designer valuable feedback in the form of a time and cost associated with manufacturing the part. Generating these parameters requires simulating all aspects of creating the part, and models that replicate these aspects are presented here. For milling, an automatic tool selection method is presented. Given this tooling, another model uses specific information about the part to generate a tool path length. A machining simulation model calculates relevant parameters and estimates a machining time from the previously determined tool and tool path. This time value, along with the machining parameters, is used to estimate the wear on the tooling used in the process, and from the machining time and tool wear a cost for the process can be determined. Other models capture the times of non-machining production steps, and all times are combined with billing rates of machines and operators to produce an overall cost for machining a feature on a part. If several such features are required to create the part, these models are applied to each feature until a complete process plan has been created. The process plan then requires further post-processing: using a list of available machines, this work considers creating the part on each machine, or any combination of these machines. Candidates for creating the part on specific machines are generated and filtered on time and cost to keep only the best, and these candidates are returned to the user, who can evaluate them and choose one. Results are presented for several example parts.
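A toy roll-up of the time-and-cost chain described above, from tool path length to machining time, tool wear, and billed cost; the feed rate, linear wear model, and billing rates are invented placeholders.

```python
def feature_cost(path_length_mm, feed_mm_per_min=300.0, tool_price=40.0,
                 tool_life_min=120.0, machine_rate_per_min=1.5, setup_min=10.0):
    t = path_length_mm / feed_mm_per_min                 # machining time (min)
    tool_wear_cost = tool_price * (t / tool_life_min)    # linear wear model
    machine_cost = machine_rate_per_min * (t + setup_min)
    return t, tool_wear_cost + machine_cost

# Apply the models feature by feature, as a process planner would
features = [4500.0, 12000.0, 800.0]   # tool path lengths (mm), made up
total_time = total_cost = 0.0
for length in features:
    t, c = feature_cost(length)
    total_time += t
    total_cost += c
print(f"time: {total_time:.1f} min, cost: ${total_cost:.2f}")
```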
Item: Automated Synthesis Tool for Design Optimization of Power Electronic Converters (2013-01-09)
Mirjafari, Mehran
Designers of power electronic converters usually face multiple performance indices that must be optimized simultaneously, such as maximizing efficiency while minimizing mass, or maximizing reliability while minimizing cost. The experienced engineer applies judgment to reduce the number of possible designs to a manageable set of feasible designs to prototype and test, so the optimality of this design-space reduction depends directly on the experience, expertise, and biases of the designer. Practitioners are familiar with tradeoff analysis, but simple tradeoff studies can become difficult or even intractable when multiple metrics are considered, so a scientific and systematic approach is needed. In this dissertation, a multi-objective optimization framework is presented as a design tool. Optimization of power electronic converters is certainly not a new subject; however, when limited to off-the-shelf components, the resulting system is optimized only over the set of commercially available components, which may represent just a subset of the design space, namely the reachable space limited by available components and technologies. While that approach is suited to cost-reducing an existing design, it offers little insight into design possibilities for greenfield projects. Instead, this work uses technology characterization methods (TCM) to broaden the reachable design space by considering fundamental component attributes. The result is a specification of the components that create the optimal design, rather than an evaluation of an a priori selected set of candidate components; a unique outcome of this approach is that new technology development vectors may emerge for developing optimized components for the optimized power converter. The approach uses a mathematical descriptive language to abstract the characteristics and attributes of the components in a power electronic converter in a way suitable for multi-objective and constrained optimization methods. TCM is used to bridge the gap between high-level performance attributes and low-level design attributes where no direct relationship currently exists. Loss and size models for inductors, capacitors, IGBTs, MOSFETs, and heat sinks form the objective functions of the multi-objective optimization problem. A single-phase IGBT-based inverter is optimized for efficiency and volume based on the component models derived using TCM; comparing the obtained designs with a design built from commercial off-the-shelf components shows that converter design can be optimized beyond what is possible with off-the-shelf components alone. A module-integrated photovoltaic inverter is also optimized for efficiency, volume, and reliability, and an actual converter is constructed from commercial off-the-shelf components chosen as close as possible to a point obtained by the optimization; experimental results show that the converter modeling is accurate. A new approach for evaluating the efficiency of photovoltaic converters is also proposed, and the front-end portion of a photovoltaic converter is optimized for this efficiency as well as for reliability and volume.
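Any such multi-objective study ultimately reduces to keeping the non-dominated designs. Below is a small Pareto filter over random stand-in (loss, volume) pairs; the candidates are random placeholders, not TCM-derived models.

```python
import random

random.seed(0)

# Stand-in candidates scored on two minimized objectives: loss (W), volume (cm^3)
designs = [(random.uniform(5, 50), random.uniform(100, 900)) for _ in range(50)]

def dominates(a, b):
    """a dominates b: no worse in every objective, strictly better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

pareto = [d for d in designs
          if not any(dominates(other, d) for other in designs)]

for loss, vol in sorted(pareto):
    print(f"loss {loss:5.1f} W, volume {vol:6.1f} cm^3")
```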
Item: Border Crossing Modeling and Analysis: A Non-Stationary Dynamic Reallocation Methodology For Terminating Queueing Systems (2012-10-19)
Moya, Hiram
The United States international land boundary is a volatile, security-intense area. In 2010, combined trade among the North American nations was $918 billion, with 80% transported by commercial trucks. Over 50 million commercial vehicles cross the Texas/Mexico border every year, not counting private vehicles and pedestrian traffic, between Brownsville and El Paso, Texas, through one of more than 25 major border crossings called "ports of entry" (POEs). Recently, securing the southwest border against terrorist interventions, undocumented immigrants, and the illegal flow of drugs and guns has dominated the need to process people, goods, and traffic efficiently and effectively, and increasing security and inspection requirements are seriously affecting transit times. Each POE is configured as a multi-commodity, prioritized queueing network that rarely, if ever, operates in steady state. The problem is therefore one of balancing a reduction in wait time and its variance, POE operating costs, and the sustainment of a given security level. The contribution of this dissertation is threefold. First, queueing theory is applied to the border crossing process to develop a methodology that decreases border wait times without increasing costs or affecting security procedures; the outcome is the Dynamic Reallocation Methodology (DRM). Currently, POE inspection stations are fixed and can each inspect only one truck type, FAST or non-FAST program participants; the methodology proposes movable servers that, once a threshold is met, can be switched to serve the other truck type. Particular emphasis is given to inspection (service) times under time-varying arrivals (demands). The second contribution is an analytical model of the POE used to analyze the effects of the DRM. DRM benefits are first evaluated assuming Markovian service times; however, field data and other research suggest a general service-time distribution, so a Coxian k-phase approximation is implemented and the DRM is analyzed against this new baseline using the expected number in the system and cycle times. A variance reduction procedure is also proposed and evaluated under the DRM. Results show that queue length and wait time are reduced by 10 to 33%, depending on load, while FAST wait times increase by less than three minutes.

Item: Challenges and Solutions for Intrusion Detection in Wireless Mesh Networks (2014-05-03)
Hassanzadeh, Amin
The problem of intrusion detection in wireless mesh networks (WMNs) is challenging, primarily because of the lack of single vantage points where traffic can be analyzed and the limited resources available to participating nodes. Although the problem has received some attention from the research community, little is known about the tradeoffs among different objectives, such as high network performance, low energy consumption, and high security effectiveness. In this research, we show how accurate intrusion detection can be achieved in such resource-constrained environments. The major challenges that hinder the performance of intrusion detection systems (IDS) in WMNs are limited resources (e.g., energy, processing, and storage capabilities) combined with ad hoc, dynamic communication flows. In light of these challenges, we classify the proposed solutions into four classes: (1) resourceless traffic-aware (RL-TW) IDS, (2) resourceless traffic-agnostic (RL-TG) IDS, (3) resourceful traffic-agnostic (RF-TG) IDS, and (4) resourceful traffic-aware (RF-TW) IDS. To achieve a desirable level of intrusion detection in WMNs, we propose a research program encompassing five thrusts. First, we show how traffic awareness helps IDS solutions achieve high detection rates in resource-constrained WMNs. Next, we propose two RL-TG (cooperative and non-cooperative) IDS solutions that can optimally monitor the entire WMN traffic without relying on WMN traffic information. The third (RF-TG) and fourth (RF-TW) IDS solutions propose energy-efficient monitoring mechanisms for intrusion detection in battery-powered WMNs, for traffic-agnostic and traffic-aware scenarios respectively. We then investigate the attack and fault tolerance of our proposed solutions, and finally enumerate potential improvements and future work.

Item: Clock Distribution Network Optimization by Sequential Quadratic Programing (2010-07-14)
Mekala, Venkata
Clock meshes are widely used in microprocessor designs to achieve low clock skew and high tolerance to process variation. Clock mesh optimization is a very difficult problem because the mesh is a highly connected structure and requires accurate delay models, which are computationally expensive. Existing methods for clock network optimization are either restricted to clock trees, which are easy to separate into smaller problems, or are naive heuristics based on crude delay models. A clock mesh sizing algorithm aimed at minimizing total mesh wire area subject to clock skew constraints is proposed in this research. The algorithm is a systematic solution search through rigorous sequential quadratic programming (SQP), guided by an efficient adjoint sensitivity analysis with near-SPICE (Simulation Program with Integrated Circuit Emphasis) accuracy and faster-than-SPICE speed. Experimental results on various benchmark circuits indicate that the algorithm achieves substantial wire area reduction, about 33%, while maintaining low clock skew in the clock mesh.
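As a closing illustration of the SQP idea, the sketch below sizes three mesh segments with SciPy's SLSQP solver, minimizing wire area subject to a crude skew constraint; the delay model (RC proportional to length/width) and all numbers are stand-ins for the thesis's adjoint-based models.

```python
import numpy as np
from scipy.optimize import minimize

lengths = np.array([100.0, 150.0, 80.0])   # segment lengths, arbitrary units

def area(w):                                # total wire area to minimize
    return float(np.dot(lengths, w))

def skew(w):                                # toy Elmore-like delay spread
    delays = lengths / w
    return delays.max() - delays.min()

res = minimize(
    area,
    x0=np.ones(3),
    method="SLSQP",
    bounds=[(0.1, 10.0)] * 3,
    constraints=[{"type": "ineq", "fun": lambda w: 5.0 - skew(w)}],
)
print("widths:", res.x.round(3), "area:", round(res.fun, 1))
```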