Browsing by Subject "Discrete-time systems"
Now showing 1 - 9 of 9
Item: A discrete optimal control approach to the solving of nonlinear equations (Texas Tech University, 1974-08) Pan, Ching-Tsai
Not available

Item: Cumulant control of discrete time linear stochastic systems (Texas Tech University, 1979-05) Parten, Clifford Ray
Traditionally, feedback regulator control laws for linear stochastic systems have been obtained by minimizing the mathematical expectation of a particular quadratic performance measure. Over the past decade, some research efforts have been directed at generalizing this class of problems to include higher-order statistical indices involving mean-conditional cumulants of the quadratic performance measure. However, these efforts, made in the continuous-time case, have not yielded a complete generalization. In this work such a generalization is achieved. The key question addressed involves enforcement of an "admissibility" constraint on the set of control actions over which optimization is carried out. This constraint ensures that the control actions are physically realizable and determined by feedback control laws. The main result of this work is the discovery of an equivalent optimization problem in which the admissibility constraint is not present.

Item: Deterministic and stochastic discrete-time epidemics with spatial considerations (Texas Tech University, 1998-05) Burgin, Amy Marie Blackstock
Not available

Item: Discrete-time partially observed Markov decision processes: ergodic, adaptive, and safety control (2002) Hsu, Shun-pin; Arapostathis, Ari, 1954-
In this dissertation we study stochastic control problems for systems modelled by discrete-time partially observed Markov decision processes. The issues we consider include ergodic control, adaptive control, and safety control. For ergodic control we propose a new condition that weakens the elegant interior accessibility assumption suggested recently. Using the standard procedure to transform the partially observed control problem into its completely observed equivalent, and then applying the vanishing discount method, we obtain Bellman's ergodic optimality equation, which characterizes the optimal policy. We also provide an example comparing our assumption with those of previous work. When there is more than one decision maker in the system, we formulate the problem as a stochastic non-cooperative game in which each decision maker seeks to minimize his or her own long-run average cost. A special class of systems with two decision makers and a mixed observation structure is considered, and the existence of a Nash equilibrium for the policies is proved. In the study of adaptive control we extend the setting of ergodic control to one in which the transition matrix is parameterized by an unknown vector. Motivated by notions of weak ergodicity, we propose a condition on the structure of the transition matrix that yields ergodic behavior of the underlying controlled process. Under additional hypotheses, we show that the proposed adaptive policy is self-optimizing in an appropriate sense. A new concept, designated safety control, is introduced, where the notion of safety is specified in terms of membership in a set called the safe set. We study the choice of an appropriate policy (called a safe policy) and an initial state probability distribution such that a safety request, which asks the state probability distribution of the system to lie in a given convex set at each time step, is met. Since the choice of a safe policy is not unique in general, we apply techniques of constrained Markov decision processes to find a policy that is optimal in an appropriate sense among the candidates. We also develop an algorithm to find the largest set of initial state probability distributions for which a given safe policy meets the safety request. The algorithm is proved to terminate in finitely many steps under reasonable assumptions. Finally, we investigate safety control under partial observations. A machine replacement problem is studied in detail and numerical simulations are presented.

Item: On algebraic aspects of control (Texas Tech University, 1983-12) Bailey, Susan Gruenhagen
Not available

Item: The discrete observability of the heat equation (Texas Tech University, 1987-05) Li, Zhu
In this thesis, the problem of discrete observability of the unforced heat equation is studied. It is explicitly shown that under certain conditions the observability of the heat equation is preserved by two spatial samples and an infinite set of discrete temporal samples. It is shown that the infinite matrix A = (a_ij), with a_ij = exp(-i^2 t_j) for i = 0, 1, 2, ... and j = 1, 2, 3, ..., where the t_j are real and t_j < t_{j+1}, is a one-to-one linear operator on the space ℓ∞. In the special case t_j = j, the explicit form of the inverse of the principal n x n submatrix of A is calculated for all finite n. This provides a setting in which discrete observability can be verified by explicit estimates.

Item: The mathematics of interpolation and sampling (Texas Tech University, 1986-08) Smith, Jennifer K
In this thesis, continuous-time, autonomous, observable dynamical systems are studied. The main problem considered is whether sampling at discrete times preserves observability. The discrete observability problem is shown to be equivalent to the general theory of linear interpolation. The mathematical theory used in this paper is Pólya's property W, which is used to produce several new results. In addition, the problem of discrete sampling is also interpreted as an n-point boundary value problem and as a problem of independence in the dual space.

Item: A time-centered split for implicit discretization of unsteady advection problems (2008-05) Fu, Shipeng, 1975-; Hodges, Ben R.
Environmental flows (e.g. river and atmospheric flows) governed by the shallow water equations (SWE) are usually dominated by the advective mechanism over multiple time scales. The combination of time dependency and nonlinear advection creates difficulties in the numerical solution of the SWE. A fully implicit scheme is desirable because a relatively large time step may be used in a simulation. However, nonlinearity in a fully implicit method results in a system of nonlinear equations to be solved at each time step. To address this difficulty, a new method for the implicit solution of unsteady nonlinear advection equations is developed in this research. This Time-Centered Split (TCS) method uses a nested application of the midpoint rule to computationally decouple advection terms in a temporally second-order accurate time-marching discretization. The method requires the solution of only two sets of linear equations without an outer iteration, and is theoretically applicable to quadratically nonlinear coupled equations in any number of variables. To explore its characteristics, the TCS algorithm is first applied to one-dimensional problems and compared to conventional nonlinear solution methods. The temporal accuracy and practical stability of the method are confirmed using these 1D examples. It is shown that TCS can computationally linearize unsteady nonlinear advection problems without either (1) outer iteration or (2) calculation of the Jacobian. A family of TCS methods is created in one general form by introducing weighting factors on different terms. We prove both analytically and by example that the values of the weighting factors do not affect the order of accuracy of the scheme. In addition, with special combinations of weighting factors the TCS method can not only computationally linearize but also decouple an equation system of coupled variables. Hence, the TCS method provides flexibility and efficiency in applications.

Item: Two-dimensional linear discrete systems: a polynomial fractional approach (Texas Tech University, 1988-08) Gapinski, Andrzej J
The purpose of this dissertation is two-fold. First, the class of two-dimensional linear time-invariant discrete systems is investigated and a unified approach is proposed for its representation. This approach, based on the two-dimensional polynomial fractional representation, is further extended to two-dimensional, linear, time-varying discrete systems. The algebraic framework is established using the division process in K[z1, z2], which is defined and investigated. A ring of generalized two-dimensional polynomials K{z1-,z2} with the division property is also defined. The main structure of the proposed realization and control theory is based on a module of signals over a two-dimensional polynomial ring and a skew polynomial ring. The 2-D Kalman input-output map is defined, and a realization based on its factorization is considered. Various models for 2-D systems are also considered in the time-invariant case. Secondly, system-theoretic properties such as reachability and observability are explored, and the stability problem is considered. In the sequel, the polynomial equation QX + RY = ö is explored, and conditions for the control of two-dimensional systems are specified.
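The cumulant-control abstract above takes as its starting point the traditional minimization of the expected quadratic performance measure. As background only, here is a minimal sketch of that classical computation, a discrete-time LQR gain via backward Riccati recursion; the double-integrator plant and all matrices are illustrative, not from the thesis, and this is the baseline the thesis generalizes, not its cumulant method:

```python
import numpy as np

def lqr_gain(A, B, Q, R, horizon=200):
    """Backward Riccati recursion minimizing E[sum x'Qx + u'Ru]."""
    P = Q.copy()
    for _ in range(horizon):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)  # feedback gain
        P = Q + A.T @ P @ (A - B @ K)                      # Riccati update
    return K, P

# Illustrative double-integrator plant (invented for the example)
A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])

K, P = lqr_gain(A, B, Q, R)
# u = -Kx stabilizes the plant: closed-loop eigenvalues lie inside the unit circle
assert max(abs(np.linalg.eigvals(A - B @ K))) < 1.0
```

The recursion converges because the plant is stabilizable and Q is positive definite; the converged P is the cost-to-go matrix whose expectation the classical formulation minimizes.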
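The safety-control formulation in the Hsu dissertation above asks that the state probability distribution lie in a given convex set at every time step. A toy illustration of checking such a safety request by propagating the distribution under a fixed policy's transition matrix; the two-state chain, the safe set, and the horizon are all invented for the example:

```python
import numpy as np

# Transition matrix of a two-state chain under some fixed (hypothetical) safe policy
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
# Convex safe set: distributions pi with 0.4 <= pi[0] <= 0.8
lo, hi = 0.4, 0.8

pi = np.array([0.5, 0.5])  # candidate initial state distribution
for _ in range(50):
    assert lo <= pi[0] <= hi, "safety request violated"
    pi = pi @ P  # the distribution evolves as pi_{t+1} = pi_t P
```

For this chain the first component increases monotonically from 0.5 toward the stationary value 2/3, so the whole trajectory stays inside the safe set; the dissertation's algorithm characterizes the largest set of initial distributions for which this holds.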
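The Li thesis above inverts the principal n x n submatrix of the matrix A with entries a_ij = exp(-i^2 t_j). A small numerical sketch of why that submatrix is invertible in the special case t_j = j (the size n = 3 and the recovered vector are illustrative): writing x_i = exp(-i^2), row i is (x_i, x_i^2, ..., x_i^n), a Vandermonde-type matrix with distinct nodes.

```python
import numpy as np

n = 3  # illustrative size; the thesis computes the inverse for all finite n
# Principal n x n submatrix: rows i = 0..n-1, columns j = 1..n, with t_j = j
A = np.array([[np.exp(-(i ** 2) * j) for j in range(1, n + 1)]
              for i in range(n)])

# Invertibility lets a sample vector b = A c be inverted for c
c_true = np.array([1.0, -2.0, 0.5])
b = A @ c_true
c_rec = np.linalg.solve(A, b)
assert np.allclose(c_rec, c_true)
```

The explicit inverse derived in the thesis replaces the generic `solve` here and is what makes the observability estimates explicit.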
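The TCS abstract above describes replacing one nonlinear implicit solve with two linear stages built from nested midpoint rules. The following scalar-ODE analogue is not the TCS scheme itself (which targets PDE advection terms) but a sketch, with invented details, of the freeze-and-solve idea: for du/dt = -u^2, each stage is linear in its unknown because the other factor of the quadratic term is frozen at an already-known value.

```python
def step(u, dt):
    """One time step for du/dt = -u^2 using two linear stages."""
    # Stage 1: half-step predictor, coefficient frozen at the old value,
    # i.e. solve (u_star - u)/(dt/2) = -u * u_star  (linear in u_star)
    u_star = u / (1.0 + 0.5 * dt * u)
    # Stage 2: midpoint corrector, coefficient frozen at u_star,
    # i.e. solve (u_new - u)/dt = -u_star * (u + u_new)/2  (linear in u_new)
    return u * (1.0 - 0.5 * dt * u_star) / (1.0 + 0.5 * dt * u_star)

u, dt = 1.0, 0.01
for _ in range(100):               # integrate to t = 1
    u = step(u, dt)
exact = 1.0 / (1.0 + 1.0)          # exact solution u(t) = u0 / (1 + u0 t), u0 = 1
assert abs(u - exact) < 1e-3
```

No outer iteration and no Jacobian appear, which is the computational point the abstract makes; the actual TCS method applies this kind of nesting to coupled advection terms with weighting factors.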