# Browsing by Subject "optimization"

Now showing 1 - 20 of 37

**A multiperiod optimization model to schedule large-scale petroleum development projects** (2009-05-15). Husni, Mohammed Hamza

This dissertation solves an optimization problem in the area of scheduling large-scale petroleum development projects under several resource constraints. It focuses on the application of a metaheuristic search, the Genetic Algorithm (GA), to solving the problem. The GA is a global search method inspired by natural evolution, widely applied to complex and sizable problems that are difficult to solve with exact optimization methods. A classical resource allocation problem in operations research, the Knapsack Problem (KP), is used to formulate the problem. The present work was motivated by a petroleum development scheduling problem in which large-scale investment projects are to be selected subject to a number of resource constraints over several periods. The constraints may arise from limitations in various resources such as capital budgets, operating budgets, and drilling rigs. The model also accounts for a number of assumptions and business rules encountered in the motivating application. It uses an economic performance objective: maximizing the sum of the Net Present Values (NPV) of the selected projects over a planning horizon, subject to constraints involving discrete time-dependent variables. Computational experiments with 30 projects illustrate the performance of the model; the application example is only illustrative and does not reveal real data. A greedy algorithm was first used to construct an initial estimate of the objective function, and the GA was then applied to improve the solution and to investigate the resource constraints and their effect on asset value. The timing and order of investment decisions under constraints have a prominent effect on the economic performance of the assets.
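As an aside, the greedy value-ratio seeding step mentioned in this abstract can be sketched generically; the function, names, and numbers below are illustrative, not drawn from the dissertation itself:

```python
def greedy_select(projects, budget):
    """Greedily pick projects by NPV-to-cost ratio until the budget is exhausted.

    `projects` is a list of (name, npv, cost) tuples; this is a generic 0/1
    knapsack heuristic, not the dissertation's multi-period formulation.
    """
    chosen, spent, total_npv = [], 0.0, 0.0
    # Highest value per unit of consumed budget first.
    for name, npv, cost in sorted(projects, key=lambda p: p[1] / p[2], reverse=True):
        if spent + cost <= budget:
            chosen.append(name)
            spent += cost
            total_npv += npv
    return chosen, total_npv

projects = [("A", 120.0, 50.0), ("B", 90.0, 30.0), ("C", 60.0, 40.0)]
print(greedy_select(projects, budget=80.0))  # → (['B', 'A'], 210.0)
```

Such a greedy pass gives a feasible starting point whose objective value a metaheuristic like a GA can then improve.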
The application of an integrated optimization model provides a means to maximize the financial value of the assets, to allocate limited resources efficiently, and to analyze more scheduling alternatives in less time.

**Acquiring 3D Full-body Motion from Noisy and Ambiguous Input** (2012-07-16). Lou, Hui

Natural human motion is in high demand and widely used in applications such as video games and virtual reality. However, acquiring full-body motion remains challenging because a system must capture a wide variety of human actions accurately without requiring a considerable amount of time and skill to assemble. For instance, commercial optical motion capture systems such as Vicon can capture human motion with high accuracy and resolution, but they often require post-processing by experts, which is time-consuming and costly. Microsoft Kinect, despite its popularity and wide application, does not reconstruct complex movements accurately when significant occlusions occur. This dissertation explores two approaches that accurately reconstruct full-body human motion from noisy and ambiguous input captured by commercial motion capture devices. The first automatically generates high-quality human motion from noisy data obtained from commercial optical motion capture systems, eliminating the need for post-processing. The second accurately captures a wide variety of human motion, even under significant occlusions, using color/depth data captured by a single Kinect camera. The common theme underlying both approaches is the use of prior knowledge embedded in a pre-recorded motion capture database to reduce the reconstruction ambiguity caused by noisy and ambiguous input and to constrain the solution to lie in the space of natural motion.
More specifically, the first approach constructs a series of spatial-temporal filter bases from pre-captured human motion data and employs them, along with robust statistics techniques, to filter motion data corrupted by noise and outliers. The second approach formulates the problem in a Maximum a Posteriori (MAP) framework and generates the most likely pose that explains the observations while remaining consistent with the patterns embedded in the pre-recorded motion capture database. We demonstrate the effectiveness of our approaches through extensive numerical evaluations on synthetic data and comparisons against results created by commercial motion capture systems. The first approach can effectively denoise a wide variety of noisy motion data, including walking, running, jumping, and swimming, while the second is shown to reconstruct a wider range of motions accurately than Microsoft Kinect.

**Adaptive Resource Allocation for Statistical QoS Provisioning in Mobile Wireless Communications and Networks** (2012-02-14). Du, Qinghe

Because wireless channels vary strongly over the time, frequency, and space domains, statistical QoS provisioning, rather than deterministic QoS guarantees, has become a recognized feature of next-generation wireless networks. In this dissertation, we study adaptive wireless resource allocation problems for statistical QoS provisioning, such as guaranteeing a specified delay-bound violation probability, upper-bounding the average loss rate, and optimizing the average goodput/throughput, in several typical types of mobile wireless networks. In the first part of this dissertation, we study statistical QoS provisioning for mobile multicast through adaptive resource allocation, where different multicast receivers attempt to receive common messages from a single base-station sender over broadcast fading channels.
Because of the heterogeneous fading across different multicast receivers, both instantaneous and statistical, designing efficient adaptive rate control and resource allocation for wireless multicast is a widely cited open problem. We first study the time-sharing based goodput-optimization problem for non-realtime multicast services. Then, to characterize the QoS provisioning problem for mobile multicast with diverse QoS requirements more comprehensively, we integrate statistical delay-QoS control techniques (effective capacity theory), statistical loss-rate control, and information theory to propose a QoS-driven optimization framework. Applying this framework and solving the corresponding optimization problem, we identify the optimal tradeoff among statistical delay-QoS requirements, sustainable traffic load, and average loss rate under adaptive resource allocation and queue management. Furthermore, we study adaptive resource allocation for multi-layer video multicast to satisfy diverse statistical delay- and loss-QoS requirements across the video layers. In addition, we derive an efficient adaptive erasure-correction coding scheme for packet-level multicast, where the erasure-correction code is dynamically constructed from the multicast receivers' packet-loss statuses, to achieve high error-control efficiency in mobile multicast networks. In the second part of this dissertation, we design adaptive resource allocation schemes for QoS provisioning in unicast-based wireless networks, with emphasis on statistical delay-QoS guarantees. First, we develop QoS-driven time-slot and power allocation schemes for multi-user downlink transmissions (with independent messages) in cellular networks to maximize the delay-QoS-constrained sum system throughput. Second, we propose delay-QoS-aware base-station selection schemes for distributed multiple-input-multiple-output systems.
Third, we study queue-aware spectrum sensing in cognitive radio networks for statistical delay-QoS provisioning. Analyses and simulations show the advantages of the proposed schemes and the impact of delay-QoS requirements on adaptive resource allocation in various environments.

**Application of a Constrained Optimization Technique to the Imaging of Heterogeneous Objects Using Diffusion Theory** (2011-02-22). Sternat, Matthew Ryan

The problem of inferring or reconstructing the material properties (cross sections) of a domain through noninvasive techniques, i.e., methods using only input and output at the domain boundary, is attempted using the governing laws of neutron diffusion theory as an optimization constraint. A standard Lagrangian consisting of the objective function and the constraints was formed and minimized using a line search method: Newton's method with the Armijo algorithm for step-length control. A Gaussian elimination procedure was applied to form the Schur complement of the system, which improved computational efficiency. In the one-energy-group and multi-group models, the limits of parameter reconstruction with respect to maximum reconstruction depth, resolution, and number of experiments were established. The maximum reconstruction depth for the one-group absorption cross section or multi-group removal cross section was only approximately 6-7 characteristic lengths; beyond this limit, features in the center of a domain begin to diminish regardless of the number of experiments. When a small domain of fixed size was considered, the maximum reconstruction resolution for the one-group absorption or multi-group removal cross section was approximately one fourth of a characteristic length.
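As an aside, the Armijo step-length control named in the abstract above is a standard backtracking rule; a minimal generic sketch follows (the parameter values are common textbook defaults, not those used in the dissertation):

```python
import numpy as np

def armijo_step(f, grad_f, x, direction, alpha0=1.0, beta=0.5, c=1e-4):
    """Backtracking (Armijo) line search: shrink the step until the
    sufficient-decrease condition f(x + a*d) <= f(x) + c*a*(grad.d) holds."""
    alpha = alpha0
    fx = f(x)
    slope = float(np.dot(grad_f(x), direction))  # directional derivative; negative for descent
    while f(x + alpha * direction) > fx + c * alpha * slope:
        alpha *= beta
    return alpha

# Usage on a simple quadratic f(x) = x.x with the steepest-descent direction.
f = lambda x: float(np.dot(x, x))
g = lambda x: 2.0 * x
x = np.array([2.0, -1.0])
alpha = armijo_step(f, g, x, -g(x))
print(alpha, f(x + alpha * (-g(x))))  # → 0.5 0.0
```

In a Newton scheme, `direction` would be the Newton step rather than the negative gradient; the backtracking logic is unchanged.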
When resolution finer than this is considered, there is simply not enough information to recover that many regions' cross sections, regardless of the number of experiments or the flux-to-cross-section mesh refinement. When reconstructing fission cross sections, the one-group case is identical to absorption, so only the multi-group case is considered, and the problem becomes more ill-posed: a given change in boundary flux corresponds to a much greater change in the fission cross section than in the removal cross section, pushing the convergence criteria to their limits. Because of this greater ill-posedness, the maximum reconstruction depth for multi-group fission cross sections is 5 characteristic lengths, significantly shorter than the removal limit. To better simulate actual detector readings, random signal noise and biased noise were added to the synthetic measured solutions produced by the forward models. The magnitudes of the noise and bias were varied, and a dependency between the maximum tolerable noise magnitude and the size of the domain was established. As expected, the results showed that as a domain becomes larger its reconstructability degrades, and degrades further upon the addition of noise and bias.

**Automated Design Optimization of Synchronous Machines: Development and Application of a Generic Fitness Evaluation Framework** (2014-12-15). Deshpande, Yateendra Balkrishna

A rotating synchronous electric machine design can be described in its entirety by a combination of 17 to 24 discrete and continuous parameters pertaining to the geometry, material selection, and electrical loading. Determining the performance attributes of a design often involves numerical solution of thermal and magnetic equations. Stochastic optimization methods have proven effective for solving specific design problems in the literature.
A major challenge to design automation, however, is whether the design tool is versatile enough to solve design problems with different types of objectives and requirements. This work proposes a black-box approach intended to encompass a wide variety of synchronous machine design problems. The approach attempts to enumerate all possible attributes of interest (AoIs) to the end user, so that the design optimization problem can be framed purely as a combination of such attributes; the number of ways the end user can input requirements is thereby defined and limited. Design problems are classified according to which of the AoIs are constraints, objectives, or design parameters. It is observed that, regardless of the optimization problem definition, the evaluation of any design rests on a common set of physical and analytical models and empirical data. Problem definitions are derived from the black-box approach, and efficient fitness-evaluation algorithms are tailored to the requirements of each problem definition. The proposed framework is implemented in a Matlab/C++ environment encompassing different aspects of motor design, and is employed to design synchronous machines for three applications in which designs based on conventional motor construction did not meet all requirements. The first design problem is the development of a novel bar-conductor tooth-wound stator technology for a 1.2 kW in-wheel direct-drive motor for an electric/hybrid-electric two-wheeler (including practical implementation). The second deals with a novel outer-rotor buried-ferrite-magnet geometry for a 1.2 kW in-wheel geared motor drive used in an electric/hybrid-electric two-wheeler (including practical implementation). The third involves the design of an ultra-cost-effective, ultra-light-weight 1 kW aluminum-conductor motor.
Thus, the efficacy of automated design is demonstrated by harnessing the framework and algorithms to explore new technologies for three distinct design problems originating in practical applications.

**Automated Vehicle Articulation and Animation: A Maxscript Approach** (2011-02-22). Griffin, Christopher Corey

This thesis presents an efficient, animation-production-centric solution to the articulation and animation of computer-generated automobiles for creating animations with a high degree of believability. The thesis has two main foci: an automated and customizable articulation system for automobile models, and a vehicle animation system that uses minimal simulation techniques. Its primary contribution is the definition of a computer graphics animation software program that combines simulation and key-frame methods for defining vehicle motion, with an emphasis on maintaining efficiency to prevent long waits during the animation process and to allow immediate interactivity. The program, when implemented, allows a vehicle to be animated with minimal input and setup. These automated tools could make animating an automobile, or multiple automobiles of varying form and dimensions, much more efficient and believable in a film, animation, or game production environment.

**Control and Optimization of Vapor Compression Cycles Using Recursive Least Squares Estimation** (2012-10-19). Rani, Avinash

Vapor compression cycles are the primary method by which refrigeration and air-conditioning systems operate, and thus account for a significant portion of commercial and residential building energy consumption. This thesis presents a data-driven approach to finding the optimal operating conditions of a multi-evaporator system so as to minimize energy consumption while meeting operational requirements such as constant cooling or a constant evaporator outlet temperature.
The experimental system used for controller evaluation is a custom-built small-scale water chiller with three evaporators; each evaporator services a separate body of water, referred to as a cooling zone. The three evaporators are connected to a single condenser and variable-speed compressor, and feature variable water flow and electronic expansion valves. The control problem lies in developing a control architecture that minimizes the energy consumed by the system without prior information about the system in the form of performance maps or complex mathematical models. The architecture explored in this thesis relies on sensor data alone to formulate a function for the power consumption of the system in terms of the controlled variables, namely the condenser and evaporator pressures, using recursive least squares estimation. This cost function is then minimized to obtain optimal pressure set points, which are fed to local controllers.

**Control Systems: New Approaches to Analysis and Design** (2014-05-01). Mohsenizadeh, Daniel N

This dissertation addresses two open problems in control theory. The first concerns the synthesis of fixed-structure controllers for Linear Time Invariant (LTI) systems. Synthesizing fixed-structure/order controllers has practical importance when simplicity, hardware limitations, or reliability of implementation dictates a low order of stabilization. A new method is proposed to simplify the calculation of the set of fixed-structure stabilizing controllers for any given plant, using computational algebraic geometry techniques and the sign-definite decomposition method. Although designing a stabilizing controller of fixed structure is important, in many practical applications it is also desirable to control the transient response of the closed-loop system.
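Returning to the recursive least squares estimation described in the vapor-compression abstract above: a generic RLS update can be sketched as follows. The linear power model, variable names, and numbers are illustrative assumptions, not the thesis's actual formulation.

```python
import numpy as np

def rls_update(theta, P, phi, y, lam=0.99):
    """One recursive least squares step with forgetting factor `lam`:
    update parameter estimate `theta` and covariance `P` from
    regressor `phi` and measurement `y`."""
    phi = phi.reshape(-1, 1)
    th = theta.reshape(-1, 1)
    k = (P @ phi) / (lam + (phi.T @ P @ phi).item())   # gain vector
    err = y - (phi.T @ th).item()                      # prediction error
    th = th + k * err
    P = (P - k @ phi.T @ P) / lam                      # covariance update
    return th.ravel(), P

# Fit power ≈ a*p_cond + b*p_evap + c from streaming samples.
rng = np.random.default_rng(0)
theta, P = np.zeros(3), np.eye(3) * 1000.0
for _ in range(200):
    pc, pe = rng.uniform(5, 15), rng.uniform(1, 5)
    power = 2.0 * pc - 1.5 * pe + 4.0                  # hypothetical true model, noise-free
    theta, P = rls_update(theta, P, np.array([pc, pe, 1.0]), power)
print(np.round(theta, 3))
```

With noise-free data the estimate converges to the true coefficients (2.0, -1.5, 4.0); in a real chiller the fitted surrogate would then be minimized to pick pressure set points.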
This dissertation proposes a novel approach to approximating the set of stabilizing Proportional-Integral-Derivative (PID) controllers that guarantee transient response specifications. This desirable set of PID controllers can be constructed by applying Widder's theorem and the Markov-Lukacs representation of non-negative polynomials. The second problem handles the design and control of linear systems without requiring knowledge of a mathematical model of the system, working directly from a small set of appropriately processed measurements. The traditional approach to the analysis and control of complex systems has been to describe them mathematically with sets of algebraic or differential equations; the objective of the proposed approach is instead to determine the design variables directly from a small set of measurements. In particular, it is shown that the functional dependency of any system variable on any set of system design parameters can be determined from a small number of measurements. Once the functional dependency is obtained, it can be used to extract the values of the design parameters.

**Effective algorithms and protocols for wireless networking: a topological approach** (Texas A&M University, 2008-10-10). Zhang, Fenghui

Much research has been done on wireless sensor networks. However, most protocols and algorithms for such networks are based on the idealized Unit Disk Graph (UDG) model or do not assume any model, and many results assume knowledge of the network's location information. In practice, sensor networks often deviate significantly from the UDG model: it is not uncommon to observe stable long links more than five times longer than unstable short links in real wireless networks. A more general network model, the quasi unit-disk graph (quasi-UDG) model, captures the characteristics of wireless networks much better.
However, the understanding of the properties of general quasi-UDGs has been very limited, which impedes the design of key network protocols and algorithms. In this dissertation we study the properties of general wireless sensor networks and develop new topological/geometrical techniques for wireless sensor networking. We assume neither the ideal UDG model nor location information for the nodes; instead we work in the more general quasi-UDG model and focus on the relationship between the geometrical and topological properties of wireless sensor networks. Based on these relationships we develop algorithms that compute useful substructures (planar subnetworks, boundaries, etc.), and we present direct applications of these properties and substructures, including routing, data storage, and topology discovery. We prove that wireless networks based on the quasi-UDG model exhibit nice properties such as separability and the existence of constant-stretch backbones. We develop efficient algorithms that obtain relatively dense planar subnetworks for wireless sensor networks, and present efficient routing protocols and a balanced data storage scheme that supports range queries. We also present algorithmic results applicable to other fields (e.g., information management): based on divide and conquer and an improved color-coding technique, we develop algorithms for path, matching, and packing problems that significantly improve on the previous best algorithms, and we prove that certain problems in operations research and information management are unlikely to admit relatively effective algorithms or approximation algorithms.

**Facility Siting and Layout Optimization Based on Process Safety** (2012-02-14). Jung, Seungho

In this work, a new approach to optimizing facility layout for toxic release, fire, and explosion scenarios is presented.
By integrating a risk analysis into the optimization formulation, safer assignments for facility layout and siting have been obtained. Alongside the economic concepts used in plant layout, the new model considers the cost of the willingness to avoid a fatality, i.e., the potential injury cost due to accidents associated with toxic releases near residential areas; for fire and explosion scenarios, building or equipment damage cost replaces the potential injury cost. Two approaches have been proposed to optimize the total layout-related cost. In the first phase, using a continuous-plane approach, the overall problem was initially modeled as a disjunctive program in which the coordinates of each facility and cost-related variables are the main unknowns. The convex hull approach was then used to reformulate the problem as a Mixed Integer Non-Linear Program (MINLP) that identifies potential layouts by minimizing overall costs. This approach gives the coordinates of each facility in a continuous plane and estimates the total pipe length, the land area, and the selection of safety devices. Finally, 3-D computational fluid dynamics (CFD) was used to compare the initial and final layouts in order to see how obstacles and separation distances affect the dispersion or overpressures at the affected facilities; ANSYS CFX was employed for the dispersion study and the Flame Acceleration Simulator (FLACS) for fires and explosions. In the second phase, for fire and explosion scenarios, the study focuses on finding an optimal placement for hazardous facilities and other process plant buildings using optimization theory and by mapping risks onto the given land in order to express risk in financial terms. The land is divided into a square grid with sides of a certain size, in which each square acquires a risk score.
These risk scores, such as the probability of structural damage, are multiplied by the prices of the potential facilities to be built on the grid, yielding the financial risk. Alongside the suggested safety concepts, the new model takes into account construction and operational costs: the overall cost of a location is a function of piping cost, management cost, protection-device cost, and financial risk. This approach gives the coordinates of the best location for each facility in a 2-D plane and estimates the total piping length. Once the final layout is obtained, the CFD code FLACS is used to simulate and account for obstacle effects in 3-D space. The outcome of this study will be useful in assisting the selection of locations for process plant buildings and in risk management.

**Genetic Algorithm Based Damage Control For Shipboard Power Systems** (2010-07-14). Amba, Tushar

The work presented in this thesis concerns the implementation of a damage control method for U.S. Navy shipboard power systems (SPS). In recent years, the Navy has been seeking an automated damage control and power system management approach for future reconfigurable shipboard power systems. The methodology should be capable of representing the dynamic performance (differential-algebraic description), the steady-state performance (algebraic description), and the system reconfiguration routines (discrete events) in one comprehensive tool. The damage control approach should also improve survivability, reliability, and security, and reduce manning, through automated reconfiguration of the SPS network. To this end, this work implemented a damage control method for a notional Next Generation Integrated Power System. This thesis presents a static implementation of a dynamic formulation of a new damage control method at the DC-zonal Integrated Fight Through Power system level.
The proposed method uses a constrained binary genetic algorithm to find an optimal network configuration: one that restores all of the de-energized loads that can be restored, based on load priority, without violating the system operating constraints. System operating limits act as the constraints in the static damage control implementation. Off-line studies were conducted using an example power system modeled in PSCAD, an electromagnetic time-domain transient simulation environment and study tool, to evaluate the effectiveness of the damage control method in restoring the power system. The simulation results for the case studies showed that, in approximately 93% of cases, the proposed damage algorithm found the optimal network configuration that restores the power system network without violating the operating constraints.

**Horizontal Well Placement Optimization in Gas Reservoirs Using Genetic Algorithms** (2011-08-08). Gibbs, Trevor Howard

Determining horizontal well placement within a reservoir is a significant and difficult step in the reservoir development process. Finding the optimal well location is a complex problem involving many factors, including geological considerations, reservoir and fluid properties, economic costs, lateral direction, and technical ability. The most thorough approach is an exhaustive search, in which a simulation is run for every conceivable well position in the reservoir. Although thorough and accurate, this approach is typically not used in real-world applications because of the time required for the excessive number of simulations. This project suggests applying a genetic algorithm to the horizontal well placement problem in a gas reservoir to reduce the required number of simulations.
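Several of the abstracts above rely on a constrained binary genetic algorithm. A minimal generic sketch of one GA generation is shown below (tournament selection, one-point crossover, bit-flip mutation, constraint handled as a fitness penalty); all names, parameters, and the toy fitness are illustrative, not taken from any of these theses:

```python
import random

def next_generation(pop, fitness, cx_rate=0.9, mut_rate=0.02):
    """Produce the next GA generation from binary chromosomes in `pop`."""
    def tournament():
        a, b = random.sample(pop, 2)          # binary tournament selection
        return a if fitness(a) >= fitness(b) else b

    new_pop = []
    while len(new_pop) < len(pop):
        p1, p2 = tournament(), tournament()
        c1, c2 = p1[:], p2[:]
        if random.random() < cx_rate:         # one-point crossover
            cut = random.randrange(1, len(p1))
            c1 = p1[:cut] + p2[cut:]
            c2 = p2[:cut] + p1[cut:]
        for child in (c1, c2):                # bit-flip mutation
            new_pop.append([bit ^ (random.random() < mut_rate) for bit in child])
    return new_pop[:len(pop)]

# Toy fitness: knapsack-style value with a penalty for violating capacity.
values, weights, capacity = [6, 5, 4, 3], [4, 3, 2, 1], 6

def fitness(bits):
    value = sum(b * v for b, v in zip(bits, values))
    weight = sum(b * w for b, w in zip(bits, weights))
    return value - 10 * max(0, weight - capacity)   # constraint as penalty

random.seed(1)
pop = [[random.randint(0, 1) for _ in range(4)] for _ in range(20)]
for _ in range(30):
    pop = next_generation(pop, fitness)
best = max(pop, key=fitness)
print(best, fitness(best))
```

In the applications above, the fitness evaluation would be a reservoir or power-system simulation rather than this toy sum, which is precisely why the GA's reduced number of evaluations matters.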
This research aims first to determine whether well placement optimization is even necessary in a gas reservoir and, if so, to quantify the benefit of optimization. The performance of the genetic algorithm was analyzed through five case scenarios: one involving a vertical well and four involving horizontal wells. The genetic algorithm is used to evaluate the effect of well placement on recovery in heterogeneous and anisotropic reservoirs, with the wells constrained by surface gas rate and bottom-hole pressure in each case. The project's main new contribution is the application of genetic algorithms to studying well placement optimization in gas reservoirs. Two fundamental questions are answered. First, does well placement in a gas reservoir affect reservoir performance? If so, what is an efficient method for finding the optimal well location based on reservoir performance? The research provides evidence that well placement optimization is an important criterion during the reservoir development phase of a horizontal-well project in gas reservoirs, but is less significant for vertical wells in a homogeneous reservoir. It is also shown that genetic algorithms are an extremely efficient and robust tool for finding the optimal location.

**Imaging Heterogeneous Objects Using Transport Theory and Newton's Method** (2012-02-14). Fredette, Nathaniel

This thesis explores the inverse problem of optical tomography applied to two-dimensional heterogeneous domains. The neutral-particle transport equation was used as the forward model to simulate how neutral particles stream through and interact within these heterogeneous domains. A constrained optimization technique using Newton's method served as the basis of the inverse problem. The capabilities and limitations of the method were explored on various two-dimensional domains.
The major factors that influenced the ability of the optimization method to reconstruct the cross sections of these domains included the locations of the sources used to illuminate the domains, the number of separate experiments used in the reconstruction, the locations where measurements were collected, the optical thickness of the domain, the amount of signal noise and signal bias applied to the measurements, and the initial guess for the cross-section distribution. All of these factors were explored for problems with and without scattering. Increasing the number of sources, measurements, and experiments generally produced more successful reconstructions with less error, and also allowed optically thicker domains to be reconstructed. The maximum optical thickness that could be reconstructed with this method was ten mean free paths for pure-absorber domains and two mean free paths for domains with scattering. Applying signal noise and signal bias to the measured fluxes produced more error in the reconstructed image. Generally, Newton's method was more successful at reconstructing domains from an initial guess for the cross sections that was greater in magnitude than the true values than from an initial guess that was lower in magnitude.

**Improving Distribution System Reliability Through Risk-based Optimization of Fault Management and Improved Computer-based Fault Location** (2013-11-07). Dong, Yimai

Distribution system utilities are now under pressure to improve the reliability of the power supply, not only to increase revenue but also to meet the requirements of their customers and the Independent Service Organization's (ISO's) regulation of power quality. Optimization of fault management tasks has the potential to improve system reliability by reducing the duration and scale of outages caused by faults, through fast fault isolation and service restoration.
The research reported in this dissertation aims at improving distribution system reliability through optimized fault management. Three questions are explored and answered: 1) how to establish the cause-and-effect relationship between fault management and system reliability; 2) how individual fault management tasks can benefit from newly emerged smart grid technologies; and 3) how to improve the overall performance of fault management under new operating conditions. Optimization of fault management is done by minimizing a risk function representing system reliability. The improvement in system reliability is approached in the following steps: 1) a risk function consisting of distribution reliability indices is defined as the criterion for system reliability; 2) a new fault location method is proposed that can accurately locate faults with the assistance of voltage-sag measurements from system-wide Intelligent Electronic Devices (IEDs); 3) the fault management task of field inspection is optimized using the risk function and a probability model of the true fault location established from the fault location results; 4) the decision on executing during-fault service restoration is optimized through Monte Carlo simulation; 5) the optimized fault management is applied to processing the faults, and the improvement in system reliability is assessed by the reduction of the costs associated with these faults. The proposed optimization is demonstrated on a realistic distribution system, with the stochastic model of faults built to account for normal and extreme weather conditions.
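As an aside, the Monte Carlo decision step described above can be sketched generically: simulate many random fault scenarios under each candidate decision and compare the resulting expected costs. The two-stage cost model and all numbers below are purely illustrative assumptions, not the dissertation's model:

```python
import random

def expected_outage_cost(restore, trials=100_000, seed=0):
    """Monte Carlo estimate of expected outage cost for a yes/no
    restoration decision, under a toy cost model."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        duration = rng.expovariate(1 / 2.0)     # random outage duration (hours), mean 2
        cost = 100.0 * duration                 # customer-interruption cost if we do nothing
        if restore:
            cost = 50.0 + 0.2 * cost            # fixed switching cost + residual outage cost
        total += cost
    return total / trials

c_no, c_yes = expected_outage_cost(False), expected_outage_cost(True)
print(round(c_no), round(c_yes))                # estimates near the analytic means 200 and 90
```

The decision rule then simply picks the alternative with the lower estimated expected cost, here executing the restoration.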
Results show that the proposed optimization is capable of improving system reliability by reducing the mean and variance of the outage cost calculated over the simulated years.

Item Initial guess and optimization strategies for multi-body space trajectories with application to free return trajectories to near-Earth asteroids (2014-08) Bradley, Nicholas Ethan; Russell, Ryan Paul, 1976-; Ocampo, Cesar

The concept of calculating, optimizing, and utilizing a trajectory known as a "Free Return Trajectory" to facilitate spacecraft rendezvous with Near-Earth Asteroids is presented in this dissertation. A Free Return Trajectory may be defined as a trajectory that begins and ends near the same point, relative to some central body, without performing any deterministic velocity maneuvers (i.e., no maneuvers are planned for the nominal mission to proceed). Free Return Trajectories have been utilized previously for other purposes in astrodynamics, but they have not been applied to the problem of Near-Earth Asteroid rendezvous. Presented here is a series of descriptions, algorithms, and results related to trajectory initial guess calculation and optimal trajectory convergence. First, Earth-centered Free Return Trajectories are described in a general manner and classified into several families based on common characteristics. Next, these trajectories are used to automatically generate initial conditions in the three-body problem for the purpose of Near-Earth Asteroid rendezvous. For several bodies of interest, example initial conditions are automatically generated and subsequently converged, resulting in feasible, locally optimal, round-trip trajectories to Near-Earth Asteroids utilizing Free Return Trajectories. Subsequently, a study is performed on using an unpowered flyby of the Moon to lower the overall ΔV cost of a nominal round-trip voyage to a Near-Earth Asteroid.
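The guess-then-converge pattern described above (generate an initial guess in a simple model, then converge it numerically) can be sketched with a toy shooting problem. This is not the dissertation's trajectory code; it is a generic illustration in which a projectile's launch angle is first solved in closed form in a drag-free model, then continued step by step into a model with quadratic drag, reusing each converged solution as the next initial guess.

```python
import math

G = 9.81

def flight_range(angle, speed, drag, dt=1e-3):
    """Integrate planar projectile motion with quadratic drag (explicit Euler)."""
    x, y = 0.0, 0.0
    vx, vy = speed * math.cos(angle), speed * math.sin(angle)
    while True:
        v = math.hypot(vx, vy)
        ax = -drag * v * vx
        ay = -G - drag * v * vy
        x, y = x + vx * dt, y + vy * dt
        vx, vy = vx + ax * dt, vy + ay * dt
        if y <= 0.0 and vy < 0.0:
            return x

def solve_angle(target, speed, drag, guess):
    """Secant iteration: find the launch angle giving the target range."""
    a0, a1 = guess, guess + 1e-3
    f0 = flight_range(a0, speed, drag) - target
    for _ in range(50):
        f1 = flight_range(a1, speed, drag) - target
        if abs(f1) < 1e-4:
            return a1
        a0, a1, f0 = a1, a1 - f1 * (a1 - a0) / (f1 - f0), f1
    return a1

speed, target = 15.0, 10.0
# Step 1: closed-form guess in the simple (drag-free) model.
angle = 0.5 * math.asin(G * target / speed**2)
# Step 2: continuation -- ramp the drag up, re-converging at each level
# and reusing the previous converged angle as the initial guess.
for drag in (0.0, 0.005, 0.01, 0.02):
    angle = solve_angle(target, speed, drag, angle)
# 'angle' now hits the target range in the full-drag model.
```

The dissertation's continuation runs from a two-body model to a full-ephemeris model rather than from zero drag to full drag, but the mechanics (a homotopy over model fidelity with warm-started convergence) are the same.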
The lunar flyby is shown to appreciably decrease the overall mission cost. In creating the formulation and algorithms for the lunar flyby problem, an initial guess routine for generic planetary and lunar flyby tours was developed. This continuation algorithm is presented next; it details a novel process by which ballistic trajectories in a simplistic two-body force model are iteratively converged in progressively more realistic dynamical models until a final converged ballistic trajectory is found in a full-ephemeris, full-dynamics model. This procedure is useful for constructing interplanetary transfers and moon tours in a realistic dynamical framework; an interplanetary example and an inter-moon example are both shown. To summarize, the material in this dissertation consists of: novel algorithms to compute Free Return Trajectories and their application to Near-Earth Asteroid rendezvous; demonstration of cost savings from a lunar flyby; and a novel routine to transfer trajectories from a simplistic model to a more realistic dynamical representation.

Item Integrated Simulation and Optimization for Decision-Making under Uncertainty with Application to Healthcare (2014-11-26) Alvarado, Michelle

Many real applications require decision-making under uncertainty. These decisions occur at discrete points in time, influence future decisions, and involve uncertainties that evolve over time. Mean-risk stochastic integer programming (SIP) is one optimization tool for decision problems involving uncertainty. However, it may be challenging to develop a closed-form objective for some problems; consequently, simulation of the system performance under a combination of conditions becomes necessary. Discrete event system specification (DEVS) is a useful tool for simulation and evaluation, but simulation models do not naturally include a decision-making component.
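The mean-risk idea can be written down concretely. As a sketch with made-up scenario costs (not the dissertation's data), a mean-risk objective augments the expected cost E[f] with a weighted risk term; two common measures are the expected excess over a target η, E[(f − η)+], and the absolute semideviation, E[(f − E[f])+].

```python
import numpy as np

# Hypothetical scenario costs and probabilities for a stochastic program.
costs = np.array([10.0, 12.0, 9.0, 20.0, 11.0])
probs = np.array([0.2, 0.2, 0.2, 0.2, 0.2])

mean = float(probs @ costs)   # E[f]

def expected_excess(eta):
    """E[(f - eta)+]: expected cost overrun beyond the target eta."""
    return float(probs @ np.maximum(costs - eta, 0.0))

def abs_semideviation():
    """E[(f - E[f])+]: expected shortfall above the mean cost."""
    return float(probs @ np.maximum(costs - mean, 0.0))

lam, eta = 0.5, 13.0          # risk weight and target -- placeholders
objective = mean + lam * expected_excess(eta)
```

In a full SIP model, `costs` would be a function of the integer scheduling decisions and the objective would be minimized over them; the snippet only evaluates the two risk measures named in the abstract.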
This dissertation develops a novel approach whereby simulation and optimization models interact and exchange information, leading to solutions that adapt to changes in system data. The integrated simulation and optimization approach was applied to the scheduling of chemotherapy appointments in an outpatient oncology clinic. First, a simulation of oncology clinic operations, DEVS-CHEMO, was developed to evaluate system performance from the perspectives of patients and management. Four scheduling algorithms were developed for DEVS-CHEMO. Computational results showed that assigning patients to both chairs and nurses improved system performance by reducing appointment duration by 3%, waiting time by 34%, and nurse overtime by 4%. Second, a set of mean-risk SIP models, SIP-CHEMO, was developed to determine the start date and resource assignments for each new patient's appointment schedule. SIP-CHEMO considers uncertainty in appointment duration, acuity levels, and resource availability; the models utilize the expected excess and absolute semideviation mean-risk measures. The SIP-CHEMO models increased throughput by 1%, decreased waiting time by 41%, and decreased nurse overtime by 25% when compared to DEVS-CHEMO's scheduling algorithms. Finally, a new framework integrating DEVS and SIP, DEVS-SIP, was developed. The DEVS-CHEMO and SIP-CHEMO models were combined using the DEVS-SIP framework to create DEVS-SIP-CHEMO. Appointment schedules were determined using SIP-CHEMO and implemented in DEVS-CHEMO. If the system performance failed to meet predetermined stopping criteria, DEVS-CHEMO revised SIP-CHEMO and determined a new appointment schedule. Computational results showed that DEVS-SIP-CHEMO is preferred to using simulation or optimization alone.
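The optimize-simulate-revise loop just described can be sketched generically. Everything below is a toy stand-in (a one-parameter "schedule", a noisy simulator, a threshold stopping criterion), not the DEVS-SIP framework itself; it shows only the control flow: optimize, simulate the resulting schedule, and re-optimize with the observed performance fed back until the criterion is met.

```python
import random

def optimize(staff, overtime):
    """Toy 'SIP' stage: raise staffing in proportion to observed overtime."""
    return staff + max(1, round(overtime / 2.0))   # hypothetical rule

def simulate(staff, rng):
    """Toy 'DEVS' stage: noisy overtime hours observed for that staffing."""
    workload = 10.0 + rng.uniform(-1.0, 1.0)
    return max(0.0, workload - 2.0 * staff)

rng = random.Random(42)
staff = 1                          # initial schedule from the optimizer
for iteration in range(10):
    overtime = simulate(staff, rng)
    if overtime <= 0.5:            # stopping criterion met -> keep schedule
        break
    staff = optimize(staff, overtime)  # feed simulated performance back
```

The real framework exchanges a full appointment schedule and multiple performance measures between the two models, but the feedback structure is the same.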
DEVS-SIP-CHEMO held throughput within 1% and improved nurse overtime by 90% and waiting time by 36% when compared to SIP-CHEMO alone.

Item Mathematical Foundations and Algorithms for Clique Relaxations in Networks (2012-02-14) Pattillo, Jeffrey

This dissertation establishes mathematical foundations for the properties exhibited by generalizations of cliques, as well as algorithms to find such objects in a network. Cliques are a model of an ideal group with roots in social network analysis. They have since found applications as part of grouping mechanisms in computer vision, coding theory, experimental design, genomics, economics, and telecommunications, among other fields. Because only groups with ideal properties form a clique, cliques are often too restrictive for identifying groups in many real-world networks. This motivated the introduction of clique relaxations that preserve some of the various defining properties of cliques in relaxed form. Six clique relaxations are the focus of this dissertation: s-clique, s-club, s-plex, k-core, quasi-clique, and k-connected subgraphs. Since cliques have found applications in so many fields, research into these clique relaxations has the potential to steer the course of much future research. The focus of this dissertation is on bringing organization and rigorous methodology to the formation and application of clique relaxations. We provide the first taxonomy focused on how the various clique relaxations relate on key structural properties demonstrated by groups. We also give a framework for how clique relaxations can be formed. This equips researchers with the ability to choose the appropriate clique relaxation for an application based on its structural properties or, if an appropriate clique relaxation does not exist, to form a new one.
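Of the relaxations listed above, the k-core has a particularly simple algorithmic characterization: repeatedly delete vertices of degree below k until none remain. A minimal sketch on a small adjacency-list graph (a standard peeling algorithm shown for illustration, not code from the dissertation):

```python
def k_core(adj, k):
    """Return the vertex set of the k-core: the maximal subgraph in which
    every remaining vertex has at least k remaining neighbors (peeling)."""
    alive = set(adj)
    changed = True
    while changed:
        changed = False
        for v in list(alive):
            if sum(1 for u in adj[v] if u in alive) < k:
                alive.discard(v)
                changed = True
    return alive

# Toy graph: a triangle {a, b, c} with a pendant path c - d - e.
graph = {
    "a": ["b", "c"],
    "b": ["a", "c"],
    "c": ["a", "b", "d"],
    "d": ["c", "e"],
    "e": ["d"],
}
core = k_core(graph, 2)   # the pendant path peels away; the triangle stays
```

Unlike the maximum quasi-clique problem analyzed in the dissertation, the k-core is computable in polynomial time, which is one reason these relaxations differ so sharply in practice.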
In addition to identifying the structural properties of the various clique relaxations, we identify properties and prove propositions that are important computationally. These assist in creating algorithms that quickly find a given clique relaxation embedded in a network. We give the first analysis of the computational complexity of finding the maximum quasi-clique in a graph; this analysis identifies for researchers the appropriate set of computational tools for the maximum quasi-clique problem. We further create a polynomial-time algorithm for identifying large 2-cliques within unit disk graphs, a special class of graphs often arising in communication networks. We prove the algorithm has a guaranteed 1/2-approximation ratio and finish with computational results.

Item Methodology for designing the fuzzy resolver for a radial distribution system fault locator (Texas A&M University, 2006-04-12) Li, Jun

The Power System Automation Lab at Texas A&M University developed a fault location scheme that can be used for radial distribution systems. When a fault occurs, the scheme executes three stages. In the first stage, all data measurements and system information are gathered and processed into suitable formats. In the second stage, three fault location methods are used to assign possibility values to each line section of a feeder. In the last stage, a fuzzy resolver is used to aggregate the outputs of the three fault location methods and assign a final possibility value to each line section. By aggregating the outputs of the three methods, the fuzzy resolver aims to obtain a smaller subset of line sections as potential faulted sections than any individual fault location method provides. Fuzzy aggregation operators are used to implement fuzzy resolvers. This dissertation reports on a methodology that was developed for utilizing fuzzy aggregation operators in the fuzzy resolver.
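The aggregation step in the third stage can be sketched with standard fuzzy operators. The possibility values below are made-up placeholders; min and OWA (ordered weighted averaging) are classical fuzzy aggregation operators, shown here only to illustrate how the three methods' per-section scores might be fused into one final value.

```python
# Possibility values assigned to four line sections by three
# fault location methods (hypothetical placeholder numbers).
scores = {
    "sec1": [0.9, 0.8, 0.7],
    "sec2": [0.9, 0.2, 0.3],
    "sec3": [0.1, 0.2, 0.1],
    "sec4": [0.6, 0.7, 0.9],
}

def agg_min(values):
    """Min operator: a section scores high only if all methods agree."""
    return min(values)

def agg_owa(values, weights=(0.5, 0.3, 0.2)):
    """OWA: sort values descending, then take a weighted average."""
    return sum(w * v for w, v in zip(weights, sorted(values, reverse=True)))

final = {s: agg_owa(v) for s, v in scores.items()}
suspects = [s for s, v in final.items() if v >= 0.7]  # threshold is arbitrary
```

Here OWA keeps sec1 and sec4 as suspects while the conflicting votes on sec2 pull it below the threshold; that shrinking of the suspect set is exactly the resolver's goal.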
Three fuzzy aggregation operators (the min, OWA, and uninorm operators) and two objective functions were used to design the fuzzy resolver. Methodologies for designing fuzzy resolvers with respect to a single objective function and with respect to two objective functions were presented, along with a detailed illustration of the design process and performance studies of the designed fuzzy resolvers. In order to design and validate the fuzzy resolver methodology, data were needed; due to the lack of real field data, simulating a distribution feeder was a feasible alternative for generating them. The IEEE 34-node test feeder was modeled, time-current characteristic (TCC) based protective devices were added to it, and faults were simulated on the feeder to generate data. Based on the performance studies, the fuzzy resolver designed using the uninorm operator without weights is the first choice, since no optimal weights are needed for it. The min and OWA operators can also be used to design fuzzy resolvers; for these two operators, the methodology for designing with respect to two objective functions was the appropriate choice.

Item Modeling and Optimization of a Bioethanol Production Facility (2011-10-21) Gabriel, Kerron Jude

The primary objective of this work is to identify the optimal bioethanol production plant capacity and configuration based on currently available technology for all the processing sections involved. To carry out this study, a systematic method is utilized that involves the development of a superstructure for the overall technology selection, process simulation and model regression of each processing step, as well as equipment costing and overall economic evaluation.
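The selection logic behind such a superstructure can be sketched as a tiny enumeration: for each pretreatment/feedstock pairing, annualize the capital cost, add operating cost, and divide by ethanol output to get a price, then keep the minimum. All numbers below are invented placeholders purely to show the computation; they are not results, costs, or data from the dissertation.

```python
# Hypothetical candidate configurations: (capex $MM, opex $MM/yr, MMgal/yr).
# These numbers are illustrative placeholders only.
candidates = {
    ("AFEX", "corn stover"):   (220.0, 55.0, 70.0),
    ("lime", "switchgrass"):   (200.0, 60.0, 65.0),
    ("dilute acid", "poplar"): (240.0, 62.0, 68.0),
}

CRF = 0.13   # capital recovery factor (annualizes capex) -- placeholder

def ethanol_price(capex, opex, output):
    """Minimum ethanol price ($/gal) that recovers annualized cost."""
    return (CRF * capex + opex) / output

prices = {cfg: ethanol_price(*v) for cfg, v in candidates.items()}
best = min(prices, key=prices.get)
```

A real superstructure model optimizes over capacities and interconnections simultaneously (typically as a mathematical program, not an enumeration); this sketch only shows the minimum-ethanol-price criterion used to rank configurations.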
The developed optimization model is also designed to incorporate various biomass feedstocks as well as realistic maximum equipment sizes, keeping the work practical. For this study, the criterion for optimization is minimum ethanol price. The secondary, and more interesting, aim of this work was to develop a systematic method for evaluating the economics of biomass storage under seasonal availability. In essence, a mathematical model was developed to link seasonal availability with plant capacity, and this model was subsequently integrated into the original model; here, too, the criterion for optimization is minimum ethanol price. The results of this work reveal that the optimal bioethanol production plant capacity is ~2800 MT biomass/day utilizing Ammonia Fiber Explosion pretreatment technology and corn stover as the preferred biomass feedstock. This configuration provides a minimum ethanol price of $1.96/gal. Results also show that this optimal pretreatment choice has a relatively high sensitivity to chemical cost, thereby increasing the risk of implementation. Second to this optimal selection was lime pretreatment using switchgrass, which showed a fairly stable sensitivity to market chemical cost. For the storage economics evaluation, results indicated that biomass storage is not economical beyond a plant capacity of ~98 MMgal/yr with an average biomass shortage period of 3 months. The study also showed that for storage to be economical at all plant capacities, the storage scheme employed should be general open-air land use with a corresponding biomass loss rate, as defined in the study, of 0.5 percent per month.

Item Modeling and Optimization of Matrix Acidizing in Horizontal Wells in Carbonate Reservoirs (2013-05-07) Tran, Hau

In this study, the optimum conditions for wormhole propagation in horizontal-well carbonate acidizing were investigated numerically using a horizontal well acidizing simulator.
The factors that affect the optimum conditions are rock mineralogy, acid concentration, temperature, and acid flux in the formation; the work concentrated on the investigation of the acid flux. Analytical equations for the injection rate schedule were derived for different wormhole models. In carbonate acidizing, the existence of an optimum injection rate for wormhole propagation has been confirmed by many researchers for highly reactive acid/rock systems in linear core-flood experiments. There is, however, no reliable technique to translate the laboratory results to field applications. It has also been observed that, for the radial flow regime in field acidizing treatments, there is no single value of acid injection rate for optimum wormhole propagation. In addition, the optimum conditions are more difficult to achieve when matrix acidizing long horizontal wells. Therefore, the most efficient acid stimulation is achieved only by continuously increasing the acid injection rate so that wormhole generation at the tip of the wormhole is always maintained at its optimum conditions. Examples of acid treatments with increasing rate schedules were compared to those with a single optimum injection rate and with the maximum allowable rate. The comparison showed that the increasing-rate treatments gave the longest wormhole penetration and, therefore, the most negative skin factor for the same amount of acid injected into the formation. A parametric study was conducted for the parameters that have the most significant effects on the wormhole propagation conditions, such as injected acid volume, horizontal well length, acid concentration, and reservoir heterogeneity. The results showed that the optimum injection rate per unit length increases with increasing injected acid volume, and that it was constant across scenarios with different lateral lengths for a given rock/acid system and injected volume. The study also indicated that the optimum injection rate was lower for higher acid concentrations.
The optimum injection rate also exists for formations with heterogeneous permeability. Field treatment data for horizontal wells in Middle East carbonate reservoirs were also analyzed to validate the numerical acidizing simulator.
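The increasing-rate idea can be sketched with a simplified radial model. Assume (as a rough illustration, not the dissertation's simulator) that keeping the interstitial flux at the wormhole front at an optimum value `V_OPT` requires an injection rate proportional to the front radius, q(t) = 2π·r_wh(t)·L·φ·V_OPT; as the front advances, the required rate grows. The parameter values and the assumed constant front speed `V_WH` are hypothetical placeholders.

```python
import math

# Illustrative placeholder parameters (SI units, not field data).
PHI = 0.2          # porosity
L = 100.0          # treated lateral length, m
V_OPT = 1e-4       # optimum interstitial flux at the front, m/s
R_WELL = 0.1       # wellbore radius, m
V_WH = 5e-4        # assumed wormhole front propagation speed, m/s

def schedule(t_end, steps=100):
    """Injection rate vs time that keeps the flux at the wormhole
    front equal to V_OPT as the front radius grows linearly at V_WH."""
    out = []
    for i in range(steps + 1):
        t = t_end * i / steps
        r_wh = R_WELL + V_WH * t                  # front radius at time t
        q = 2.0 * math.pi * r_wh * L * PHI * V_OPT  # rate holding flux at V_OPT
        out.append((t, q))
    return out

rates = schedule(t_end=3600.0)       # one hour of injection
q0, q_end = rates[0][1], rates[-1][1]
# The required rate increases monotonically as the front advances,
# matching the abstract's increasing-rate treatment schedule.
```

In the dissertation the front speed itself comes from a wormhole model and the geometry is a horizontal well, so the true schedule is obtained numerically; this sketch only shows why a fixed rate cannot hold the tip at optimum conditions in radial-like flow.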