Browsing by Subject "Mathematical optimization"
Now showing 1 - 20 of 40
Item: A class of nonparametric procedures in one-factor experiments (Texas Tech University, 1998-08). Bransom, Jonathan E.

This thesis analyzes and develops an algorithm for an adaptive distribution-free procedure for testing ordered alternatives and multiple comparisons in one-way analysis of variance, including the treatment of ties, and demonstrates the superiority of these procedures over typical parametric procedures based on sample means and over the well-known Wilcoxon nonparametric procedure based on ranks. Initial data classify the underlying distribution by tail-weight and amount of skewness. This preliminary classification determines the tailoring of the specific scores on which all inferences are based. The adaptive procedure performs well over a wide range of distributions, rather than having optimal properties for any one particular distribution. The preliminary selection of an adaptive procedure affects the characteristics of the final inference: testing a null hypothesis at a nominal significance level α after selecting a model will frequently result in an overall significance level much greater than α. The model should therefore be selected by determining which corresponding test will produce the largest observed significance level.

Item: A surrogate model approach to refinery-wide optimization (Texas Tech University, 2004-08). Slaback, Dale D.

The techniques currently used to perform refinery-wide optimization can give results that are inconsistent with the overall objectives of the refinery. A full-scale nonlinear refinery-wide optimization approach can accurately predict the overall refinery optimum, but suffers from a large computational requirement. In this study, a surrogate model approach is applied to the refinery-wide optimization problem. This approach involves building both detailed and approximate models for all of the processing units in the refinery. The detailed models developed in this study are rigorous first-principles models involving the material and energy balance equations. The surrogate model approach can employ approximate models of any form; however, selecting the proper form for the approximate models can greatly increase the efficiency of the optimization. In this study, fixed-physical-property phenomenological models are used as the approximate models. By fixing the values of the stream enthalpies and vapor-liquid equilibrium constants, the total number of equations in the process models is greatly reduced. This choice of approximate model form also guarantees that, at convergence, the results of the detailed and approximate models will be identical. The ASCEND IV modeling language is chosen for creating the detailed and approximate models in this study. This modeling platform provides significant advantages over a standard programming language such as Fortran: in addition to a graphical user interface, the ASCEND software contains an integrated solver and optimizer, making implementation of the optimization procedure more straightforward. By combining the models of each unit of the refinery, a refinery-wide model is created, and the refinery-wide optimization problem is solved using the CONOPT optimization routine in ASCEND. The optimization results obtained in this work are consistent with the refinery-wide optimization results presented by Li (2000). For the refinery model created in this study, the surrogate model approach decreased the required solution time by nearly an order of magnitude. An optimization was also performed for a refinery in which some product was recycled back to the crude unit. Adding this recycle stream made the system of equations much more complex, with each unit affected by all the others; in this case, a dramatic reduction in the optimization solution time was also observed. The refinery model in this study contains 32 decision variables and 63 constraints. An industrial-scale refinery model would be much larger, perhaps including 150 decision variables. Because the solution-time reduction from the surrogate model approach grows with the number of decision variables, the time reduction for an industrial-scale refinery model is projected to be substantially larger than for the model used in this study, decreasing the solution time for refinery-wide optimization from several days to only a few hours. By decreasing the solution time, the surrogate model approach provides a practical method for performing refinery-wide optimization in an industrial setting.
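The fixed-property iteration described in the preceding abstract (freeze the stream enthalpies and vapor-liquid equilibrium constants at values computed by the rigorous model, optimize the resulting cheap surrogate, and repeat until the two models agree) can be sketched in a few lines of Python. The sketch below is a toy version of that loop; the models, variables, and tolerance are invented for illustration and are not taken from the ASCEND refinery models.

    import numpy as np
    from scipy.optimize import minimize

    def detailed_model(x):
        """Stand-in for the rigorous first-principles model (expensive).
        Returns the profit and the 'physical properties' (e.g., enthalpies
        and K-values) evaluated at the operating point x."""
        props = 1.0 + 0.1 * np.tanh(x.sum())
        return -props * np.sum((x - 2.0) ** 2), props

    def approximate_model(x, props):
        """Stand-in for the surrogate: same equations, properties frozen."""
        return -props * np.sum((x - 2.0) ** 2)

    x = np.zeros(3)                      # decision variables
    for _ in range(50):
        _, props = detailed_model(x)     # rigorous pass fixes the properties
        res = minimize(lambda v: -approximate_model(v, props), x)
        if np.linalg.norm(res.x - x) < 1e-8:
            x = res.x                    # surrogate and detailed model agree
            break
        x = res.x

    print("converged decision variables:", x)   # tends to [2, 2, 2]

At convergence the two models are evaluated at the same point with the same frozen properties, which is the sense in which the abstract's guarantee of identical results holds in this toy version.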
Item: Adaptive critic designs and their applications (Texas Tech University, 1997-12). Prokhorov, Danil V.

Abstract not available.

Item: Adaptive multiscale modeling of polymeric materials using goal-oriented error estimation, Arlequin coupling, and goals algorithms (2008-05). Bauman, Paul Thomas, 1980-; Oden, J. Tinsley (John Tinsley), 1936-

Scientific theories that explain how physical systems behave are described by mathematical models, which provide the basis for computer simulations of events that occur in the physical universe. These models, being only mathematical characterizations of actual phenomena, are subject to error because of the inherent limitations of all mathematical abstractions. In this work, new theory and methodologies are developed to quantify such modeling error in a way that resolves a fundamental, long-standing issue: multiscale modeling, the development of models of events that transcend many spatial and temporal scales. Specifically, we devise the machinery for a posteriori estimates of the relative modeling error between a fine-scale model and a coarser-scale model, and we use this methodology as a general approach to multiscale problems. The target application is one of critical importance to nanomanufacturing: imprint lithography of semiconductor devices.

The development of numerical methods for multiscale modeling has become one of the most important areas of computational science. Technological developments in the manufacturing of semiconductors hinge upon the ability to understand physical phenomena from the nanoscale to the microscale and beyond. Predictive simulation tools are critical to the advancement of the nanomanufacturing of semiconductor devices: in principle, they can displace expensive experiments and testing and optimize the design of the manufacturing process. The development of such tools rests at the edge of contemporary methods and high-performance computing capabilities and is a major open problem in computational science. In this dissertation, a molecular model is used to simulate the deformation of polymeric materials used in the fabrication of semiconductor devices.
Algorithms are described which lead to a complex molecular model of polymer materials designed to produce an etch barrier, a critical component in imprint lithography approaches to semiconductor manufacturing. Each application of this so-called polymerization process leads to one realization of a lattice-type model of the polymer, a molecular statics model of enormous size and complexity. This is referred to as the base model for analyzing the deformation of the etch barrier, a critical feature of the manufacturing process. To reduce the size and complexity of this model, a sequence of coarser surrogate models is generated. These surrogates are the multiscale models critical to the successful computer simulation of the entire manufacturing process. The surrogate involves a combination of particle models, the molecular model of the polymer, and a coarse-scale model of the polymer as a nonlinear hyperelastic material. Coefficients for the nonlinear elastic continuum model are determined using numerical experiments on representative volume elements of the polymer model. Furthermore, a simple model of initial strain is incorporated in the continuum equations to model the inherent shrinking of the polymer.

A coupled particle and continuum model is constructed using a special algorithm designed to provide constraints on a region of overlap between the continuum and particle models. This coupled model is based on the so-called Arlequin method, which was introduced in the context of coupling two continuum models with differing levels of discretization. It is shown that the Arlequin problem for the particle-to-continuum model is well posed in a one-dimensional setting involving linear harmonic springs coupled with a linearly elastic continuum. Several numerical examples are presented. Numerical experiments in three dimensions are also discussed, in which the polymer model is coupled to a nonlinear elastic continuum.

Error estimates in local quantities of interest are constructed in order to estimate the modeling error due to the approximation of the particle model by the coupled multiscale surrogate model. The estimates of the error are computed by solving an auxiliary adjoint, or dual, problem that incorporates as data the quantity of interest or its derivatives. The solution of the adjoint problem indicates how the error in the approximation of the polymer model influences the error in the quantity of interest. The error in the quantity of interest represents the relative error between the value of the quantity evaluated for the base model, a quantity typically unavailable or intractable, and the value of the quantity of interest provided by the multiscale surrogate model. To estimate the error in the quantity of interest, a theorem is employed that establishes that the error coincides with the value of the residual functional acting on the adjoint solution plus a higher-order remainder. For each surrogate in a sequence of surrogates generated, the residual functional acting on various approximations of the adjoint is computed. These error estimates are used to construct an adaptive algorithm whereby the model is adapted by supplying additional fine-scale data in certain subdomains in order to reduce the error in the quantity of interest. The adaptation algorithm involves partitioning the domain and selecting which subdomains are to use the particle model, which are to use the continuum model, and where the two overlap.
When the algorithm identifies that a region contributes a relatively large amount to the error in the quantity of interest, it is scheduled for refinement by switching the model for that region to the particle model. Numerical experiments on several configurations representative of nano-features in semiconductor device fabrication demonstrate the effectiveness of the error estimate in controlling the modeling error, as well as the ability of the adaptive algorithm to reduce the error in the quantity of interest. There are two major conclusions of this study: (1) an effective and well-posed multiscale model that couples particle and continuum models can be constructed as a surrogate to molecular statics models of polymer networks, and (2) the modeling error of such systems can be estimated with sufficient accuracy to provide the basis for very effective multiscale modeling procedures. The methodology developed in this study provides a general approach to multiscale modeling. The computational procedures, computer codes, and results could provide a powerful tool in understanding, designing, and optimizing an important class of semiconductor manufacturing processes. The study in this dissertation involves all three components of the CAM graduate program requirements: Area A, Applicable Mathematics; Area B, Numerical Analysis and Scientific Computation; and Area C, Mathematical Modeling and Applications. The multiscale modeling approach developed here is based on the construction of continuum surrogates and their coupling to molecular statics models of the polymer, as well as on a posteriori estimates of error and their adaptive control. A detailed mathematical analysis is provided for the Arlequin method in the context of coupling particle and continuum models for a class of one-dimensional model problems. Algorithms are described and implemented that solve the adaptive, nonlinear problem posed by the multiscale surrogate. Large-scale, parallel computations for the base model are also shown. Finally, detailed studies of models relevant to applications in semiconductor manufacturing are presented.
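The error-estimation theorem quoted in the preceding abstract (the error in a quantity of interest equals the residual functional acting on the adjoint solution, plus a higher-order remainder) can be seen in miniature on a linear toy problem, where the remainder vanishes. In this Python sketch the fine operator, the surrogate, and the quantity of interest are all invented for illustration; it shows the general dual-weighted-residual idea, not the dissertation's Arlequin formulation.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 50

    # "Base" (fine) model A_fine u = b: in practice too expensive to solve often.
    A_fine = np.eye(n) + 0.1 * rng.standard_normal((n, n))
    b = rng.standard_normal(n)

    # Surrogate (coarse) model: a cheaper perturbation of the fine operator.
    A_surr = A_fine + 0.01 * rng.standard_normal((n, n))

    # Quantity of interest q(u) = c^T u.
    c = np.zeros(n)
    c[0] = 1.0

    u_surr = np.linalg.solve(A_surr, b)   # surrogate solution (affordable)
    u_fine = np.linalg.solve(A_fine, b)   # base solution (here, for checking only)

    # Adjoint (dual) problem with the quantity of interest as data: A_fine^T z = c.
    # In practice one would use approximations of the adjoint, as in the abstract.
    z = np.linalg.solve(A_fine.T, c)

    # Dual-weighted-residual estimate: error in q is z^T (b - A_fine u_surr).
    # For a linear model and linear q the higher-order remainder vanishes,
    # so the estimate and the true error coincide.
    estimate = z @ (b - A_fine @ u_surr)
    true_error = c @ (u_fine - u_surr)
    print(f"estimated error {estimate:+.6e}, true error {true_error:+.6e}")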
Item: An adaptive tabu search approach to cutting and packing problems (2003). Harwig, John Michael; Barnes, J. Wesley

This research implements an adaptive tabu search procedure that controls the partitioning, ordering, and orientation features of a two-dimensional orthogonal packing solution; details an effective fine-grained objective function for cutting and packing problems; and presents effective move neighborhoods that find very good answers in a short period of time. Results meet or exceed those of all other techniques presented in the literature on a common test set, across all 500 instances. These techniques extend naturally to both two-dimensional arbitrarily-shaped and three-dimensional orthogonal packing heuristics that use rules based on given partitions, orders, and orientations. Techniques for extending this research in those directions, and methods that might improve the search, are also presented.

Item: An advanced tabu search approach to the airlift loading problem (2006). Roesener, August G.; Barnes, J. Wesley

This dissertation details an algorithm to solve the Airlift Loading Problem (ALP). Given a set of cargo to be transported from an aerial port of embarkation to one or more aerial ports of debarkation, the ALP seeks to pack the cargo items onto pallets (if necessary), partition the set of cargo items into aircraft loads, select an efficient and effective set of aircraft from those available, and place the cargo in allowable positions on those aircraft. The ALP differs from most partitioning and packing problems described in the literature because, in addition to spatial constraints, factors such as allowable cabin load, balance, and temporal restrictions on cargo loading availability and cargo delivery requirements must be considered. While classical methods would be forced to attack such problems in a hierarchical fashion by solving a sequence of related subproblems, this research develops an algorithm that simultaneously solves the combined problem by employing an advanced tabu search approach.

Item: An efficient parallel implementation of a randomized global optimization algorithm (Texas Tech University, 2004-12). Cheleenahalli, Vijay N.

Abstract not available.

Item: An exact branch and bound algorithm for the general quadratic assignment problem (Texas Tech University, 1988-12). Charnsethikul, Peerayuth

This research is concerned with the development of an exact algorithm for the general quadratic assignment problem (QAP), of which the Koopmans-Beckmann formulation, in the context of an analysis of the location of economic activities or facilities, is a special case. The algorithm is based on the linearization of a general QAP of size n into a linear assignment problem of size n(n-1)/2. The objective value and the dual solution of this subproblem are used to compute the lower bound employed in an exact branch and bound procedure. Computational experience and comparisons with other well-known methods are discussed.
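A classical cousin of the lower bound described in the preceding abstract is the Gilmore-Lawler bound, which likewise reduces a size-n QAP to a single linear assignment problem whose optimal value underestimates the QAP objective (Charnsethikul's linearization, to an assignment problem of size n(n-1)/2, differs in its details). A minimal Python sketch with random data for illustration:

    import numpy as np
    from scipy.optimize import linear_sum_assignment

    def gilmore_lawler_bound(F, D):
        """Gilmore-Lawler lower bound for the QAP
        min over permutations pi of sum_{i,j} F[i, j] * D[pi(i), pi(j)].
        For each tentative assignment i -> k, the interaction cost is
        under-estimated via the rearrangement inequality (ascending times
        descending is the minimal scalar product); a linear assignment
        problem is then solved over the resulting bound matrix."""
        n = F.shape[0]
        L = np.empty((n, n))
        for i in range(n):
            f = np.sort(np.delete(F[i], i))              # off-diagonal flows
            for k in range(n):
                d = np.sort(np.delete(D[k], k))[::-1]    # distances, descending
                L[i, k] = F[i, i] * D[k, k] + f @ d
        rows, cols = linear_sum_assignment(L)
        return L[rows, cols].sum()

    rng = np.random.default_rng(42)
    n = 8
    F = rng.integers(0, 10, (n, n))
    D = rng.integers(0, 10, (n, n))
    print("QAP lower bound:", gilmore_lawler_bound(F, D))

In a branch and bound search, a node is fathomed whenever such a bound meets or exceeds the incumbent solution value.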
Item: Branch-and-cut for piecewise linear optimization (2012-05). Rajat, Gupta; Farias, Ismael R. d.; Simonton, James L.; Matis, Timothy I.; Smith, Phillip; Zhang, Yuanlin

In this research we report and analyze the results of our extensive testing of branch-and-cut for piecewise linear optimization using cutting planes. We tested large instances of the transshipment problem using MIP, LOG, and SOS2 formulations. Besides analyzing the performance of the cuts, we also analyze the effect of the formulation on the performance of branch-and-cut. These tests were conducted using the callable libraries of CPLEX and GUROBI. Finally, we also analyzed the results of piecewise linear optimization problems with semi-continuous constraints.

Item: CAD for nanolithography and nanophotonics (2011-08). Ding, Duo; Pan, David Z.; Chen, Ray T.; Ghosh, Joydeep; Orshansky, Michael E.; Torres, J. Andres; Touba, Nur

As the semiconductor technology roadmap extends further, the development of next-generation silicon systems becomes critically challenged. On the one hand, design and manufacturing closure becomes much more difficult because of the widening gap between increasing integration density and limited manufacturing capability; as a result, manufacturability issues become more and more critical in the design of reliable silicon systems. On the other hand, the continuous scaling of feature size imposes critical issues on traditional interconnect materials (Cu/low-k dielectrics) due to power, delay, and bandwidth concerns; as a result, multiple classes of new materials are under research and development for future technology generations. In this dissertation, we investigate several critical Computer-Aided Design (CAD) challenges under advanced nanolithography and nanophotonics technologies. In addressing these challenges, we propose systematic CAD methodologies and optimization techniques to assist the design of high-yield, high-performance integrated circuits (ICs) with low power consumption.

In VLSI CAD for nanolithography, we study manufacturing variability under resolution enhancement techniques (RETs) and explore two important topics: (1) fast and high-fidelity lithography hotspot detection, and (2) generic and efficient manufacturability-aware physical design. For the first topic, we propose a number of CAD optimization and integration techniques to achieve the following goals in detecting lithography hotspots: (a) high hotspot detection accuracy; (b) a low false-positive rate (hotspot false alarms); (c) good capability to trade off detection accuracy against false alarms; (d) fast CPU run-time; and (e) excellent layout coverage and computational scalability as designs become more complex. For the second topic, we explore the routing stage by incorporating post-RET manufacturability models into the mathematical formulation of a detailed router to achieve: (a) significantly fewer lithography-unfriendly patterns; (b) small CPU run-time overhead; and (c) generality of formulation and compatibility with all types of RETs and evolving manufacturing conditions. In VLSI CAD for nanophotonics, we focus on three topics: (1) characterization and evaluation of standard on-chip nanophotonic devices; (2) low-power planar routing for on-chip opto-electrically interconnected systems; and (3) power-efficient and thermally reliable design of nanophotonic wavelength division multiplexing for ultra-high-bandwidth on-chip communication. With simulations and experiments, we demonstrate the critical role and effectiveness of Computer-Aided Design techniques as the semiconductor industry marches into the deeper sub-micron (45 nm and below) domain.

Item: Deterministic and stochastic discrete-time epidemics with spatial considerations (Texas Tech University, 1998-05). Burgin, Amy Marie Blackstock

Abstract not available.

Item: Deterministic approximations in stochastic programming with applications to a class of portfolio allocation problems (2001-08). Dokov, Steftcho Pentchev; Morton, David P.

Optimal decision making under uncertainty involves modeling stochastic systems and developing solution methods for such models. The need to incorporate randomness into many practical decision-making problems is prompted by the uncertainties associated with today's fast-paced technological environment. The complexity of the resulting models often exceeds the capabilities of commercially available optimization software, and special-purpose solution techniques are required. Three main categories of solution approaches exist for attacking a particular stochastic programming instance: large-scale mathematical programming algorithms, Monte Carlo sampling-based techniques, and deterministically valid bound-based approximations. This research contributes to the last category. First, second-order lower and upper bounds are developed on the expectation of a convex function of a random vector. Here, a "second-order bound" means that only the first and second moments of the underlying random parameters are needed to compute the bound.
The vector's random components are assumed to be independent and to have bounded support contained in a hyper-rectangle. Applications to stochastic programming test problems and an analysis of numerical performance are also presented. Second, assuming additional relevant moment information is available, higher-order upper bounds are developed. In this case the underlying random vector can have support contained in either a hyper-rectangle or a multidimensional simplex, and the random parameters can be either dependent or independent. The higher-order upper bounds form a decreasing sequence converging to the true expectation and yielding convergence of the optimal decisions. Finally, applications of the higher-order upper bounds to a class of portfolio optimization problems are presented. Mean-variance and mean-variance-skewness efficient portfolio frontiers are considered in the context of a specific portfolio allocation model as well as in general, and are connected with applications of the higher-order upper bounds in utility theory.
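The flavor of these moment bounds is easiest to see in their classical first-order ancestors: Jensen's inequality bounds the expectation of a convex function from below by the function at the mean, while the Edmundson-Madansky bound places probability mass at the corners of the support hyper-rectangle, with weights matching the first moments, to bound it from above. The Python sketch below uses an invented convex function with independent uniform components; the dissertation's second- and higher-order bounds tighten this pair using additional moment information.

    import itertools
    import numpy as np

    def jensen_lower(f, mean):
        """Jensen: E[f(X)] >= f(E[X]) for convex f."""
        return f(mean)

    def edmundson_madansky_upper(f, lo, hi, mean):
        """E-M upper bound for independent components supported on
        [lo_i, hi_i]: average f over the corners of the hyper-rectangle,
        weighting each corner so that the first moments are matched."""
        d = len(mean)
        bound = 0.0
        for corner in itertools.product((0, 1), repeat=d):
            w, x = 1.0, np.empty(d)
            for i, c in enumerate(corner):
                p_hi = (mean[i] - lo[i]) / (hi[i] - lo[i])
                w *= p_hi if c else 1.0 - p_hi
                x[i] = hi[i] if c else lo[i]
            bound += w * f(x)
        return bound

    f = lambda x: np.sum(x ** 2)                  # a convex toy function
    lo, hi = np.array([0.0, -1.0]), np.array([2.0, 1.0])
    mean = (lo + hi) / 2                          # uniform components

    samples = np.random.default_rng(1).uniform(lo, hi, (200_000, 2))
    print(jensen_lower(f, mean),                      # 1.0   (lower bound)
          np.sum(samples ** 2, axis=1).mean(),        # ~1.67 (Monte Carlo)
          edmundson_madansky_upper(f, lo, hi, mean))  # 3.0   (upper bound)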
Item: Enabling enterprise integration through architecture (Texas Tech University, 1997-12). Burg, William Dale

For firms to compete effectively in today's turbulent market environment, their supporting software systems must be able to provide new, effective system solutions in the timeframe necessary to enable the business change required to remain competitive. In order to provide flexible, responsive information systems, many organizations are pursuing the idea of building software using factory-like concepts. To develop a software factory, information systems professionals focus on building standardized production processes, components, and tools that can be reused across new system solutions. To date, these attempts have yielded the ability to build domain-specific applications, but those applications are limited in their extensibility; thus, the requirement for systems to adapt rapidly has not been met. One of the major reasons for these limited results has been the failure to base the software factory concept on an appropriate paradigm. Using the mass customization paradigm, this research effort represents a conceptual step toward building new system solutions around these driving business needs by identifying the functional requirements for its use as a referent architectural paradigm for an adaptive software factory. Using grounded theory, this exploratory research effort attempts to identify the functional requirements of the command, control, and communication mechanism of a mass-customization-based software factory by evaluating current research and development projects centered on the ideas of the software factory and component reuse. By grounding this research effort in the context in which the solution must apply, the formal propositions developed through this research effort will have a high degree of external consistency.

Item: Expert system to retrieve optimal kinetic parameters for simple reactions (Texas Tech University, 1986-05). Jou, Chon-shin

Abstract not available.

Item: Heuristic random optimization (Texas Tech University, 1998-05). Chandran, Sandeep

This chapter provides an overview of the concepts, methods, and results of various optimization techniques. Optimization aims to maximize or minimize a measure of quality called the objective function. The objective function value depends on the values chosen for the decision variables, and optimization seeks the values of the decision variables that yield the best (maximum or minimum) value of the objective function.

Item: Hierarchical systems implementation with microprocessor (Texas Tech University, 1978-05). Chiang, Ming

Abstract not available.

Item: An idempotent-analytic ISS small gain theorem with applications to complex process models (2002). Potrykus, Henry George; Qin, S. Joe

In this dissertation a general nonlinear input-to-state stability (ISS) small gain theory is developed using idempotent-analytic techniques. The small gain theorem presented may be applied to system complexes, such as those arising in process modelling, and allows for the determination of a practical compact attractor in the system's state space. Application of the theorem thus reduces the analysis of the system to one semi-local in nature; in particular, physically practical bounds on the region of operation of a complex system may be deduced. The theorem is proved within the context of the idempotent semiring $K \subset \operatorname{End}_0^{\oplus}(\mathbb{R}_{\geq 0})$. We also show that, for linear and power-law input-to-state disturbance gain functions, the deduction of the resulting sufficient condition for input-to-state stability may be performed efficiently, using any suitable dynamic programming algorithm. We indicate, through examples, how an analysis of the (weighted, directed) graph of the system complex gives a computable means to delimit, in an easily understood form, robust input-to-state stability bounds. Applications of the theory to practical chemical engineering systems, yielding novel results, round out the work and conclude the main body of the dissertation.

Item: Investigating the use of tabu search to find near-optimal solutions in multiclassifier systems (2003). Korycinski, Donna Kay; Crawford, Melba M.; Barnes, J. Wesley

Item: Majorization and pseudo-subordination of a class of analytic functions (Texas Tech University, 2000-08). Campos, Richard

In this thesis, we combine techniques used in Barnard and Kellogg's paper, Ruscheweyh's subordination result (1.4), variational techniques, and geometric properties of Möbius transformations to verify the following theorem: Theorem 1.1. If $F \in K^*$ and $f \prec_w F$ in $D$ with $f'(0) > 0$, then $f' \prec F'$ for $z \in D_1$, and the result is sharp.

Item: MATLAB GUI-based toolbox for the design of modulated perfect reconstruction filter banks (Texas Tech University, 2004-05). Zhao, Jie

The aim of this study is to create a Graphical User Interface (GUI) toolbox. This MATLAB-based toolbox provides a number of options for designing and implementing maximally decimated cosine-modulated perfect reconstruction (PR) filter banks. Using this toolbox, users can design their own filter banks without having to understand the details of the design procedure. In this thesis, the theory of and results on cosine-modulated filter banks are reviewed. The prototype filters have arbitrary lengths, and the overall delay of the filter bank is arbitrary within a fundamental range. Necessary and sufficient conditions for PR are presented using a polyphase representation. The design is formulated as a quadratic-constrained least-squares optimization problem in which the optimized parameters are the prototype filter coefficients. The design and a description of the toolbox are provided in this thesis, and several design examples illustrate the tradeoff between overall system delay and stopband attenuation.
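To make the modulation step at the heart of such a toolbox concrete: in an M-channel cosine-modulated filter bank, every analysis filter is a cosine-modulated copy of a single prototype lowpass filter. The Python sketch below uses a common textbook modulation formula (as in Vaidyanathan's treatment); the prototype here is a placeholder windowed sinc, whereas a PR design would obtain the prototype coefficients from the quadratic-constrained least-squares problem described in the abstract, and the toolbox's exact conventions may differ.

    import numpy as np

    def cmfb_analysis_filters(p, M):
        """Cosine-modulate a length-N prototype p into M analysis filters:
        h_k[n] = 2 p[n] cos( (pi/M)(k + 0.5)(n - (N-1)/2) + (-1)^k pi/4 ).
        Perfect reconstruction additionally requires p to satisfy pairwise
        power-complementarity conditions on its polyphase components."""
        N = len(p)
        n = np.arange(N)
        H = np.empty((M, N))
        for k in range(M):
            phase = (-1) ** k * np.pi / 4
            H[k] = 2 * p * np.cos(
                np.pi / M * (k + 0.5) * (n - (N - 1) / 2) + phase)
        return H

    M, N = 8, 64
    n = np.arange(N)
    # Placeholder prototype: Hamming-windowed sinc with cutoff pi/(2M).
    p = np.sinc((n - (N - 1) / 2) / (2 * M)) / (2 * M) * np.hamming(N)

    H = cmfb_analysis_filters(p, M)
    print(H.shape)   # (8, 64): one bandpass analysis filter per channel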