Nonlinear H2/H∞ Constrained Feedback Control: A Practical Design Approach Using Neural Networks

Date

2007-08-23T01:56:10Z

Publisher

Electrical Engineering

Abstract

In this research, practical methods for the design of H2 and H∞ optimal state-feedback controllers for constrained-input systems are proposed. The dynamic programming principle is used along with special quasi-norms to derive the structure of both the saturated H2 and H∞ optimal controllers in feedback strategy form. The corresponding Hamilton-Jacobi-Bellman (HJB) and Hamilton-Jacobi-Isaacs (HJI) equations are then derived. It is shown that introducing quasi-norms on the constrained input in the performance functional allows unconstrained minimization of the Hamiltonian of the corresponding optimal control problem. Moreover, it is shown how to obtain nearly optimal minimum-time and constrained-state controllers by modifying the performance functional of the optimization problem. Policy iterations on the constrained input are studied for both the H2 and H∞ cases. It is shown that the resulting sequence of Lyapunov functions in the H2 case, and of cost functions in the H∞ case, converges uniformly to the value function of the associated optimal control problem, which solves the corresponding Hamilton-Jacobi equation. The relation between policy iterations for the zero-sum game appearing in H∞ optimal control and the theory of dissipative systems is studied: it is shown that policy iterations on the disturbance player solve the nonlinear bounded real lemma problem of the associated closed-loop system. Moreover, the relation between the domain of validity of the game value function and the corresponding L2-gain is addressed through policy iterations. Neural networks are used along with the least-squares method to solve the differential equations, which are linear in the unknowns, that result from policy iterations on the saturated control in the H2 case, and on the saturated control and the disturbance in the H∞ case. The result is a neural-network constrained feedback controller that is tuned offline a priori, with the training set selected by Monte Carlo methods from a prescribed region of the state space that lies within the region of asymptotic stability of the initial stabilizing control used to start the policy iterations. Finally, the resulting algorithms are applied to several examples, including the Nonlinear Benchmark Problem, to demonstrate the effectiveness of the proposed method.
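To make the offline computational procedure described above concrete, the following Python sketch illustrates one least-squares policy-iteration loop for the H2 (constrained-input HJB) case. The scalar example system, polynomial basis, quadratic state penalty, and tanh-based quasi-norm penalty are illustrative assumptions, not the systems or basis functions used in the thesis; only the overall structure (a Monte Carlo training set from a prescribed region, a least-squares solve of the linear-in-the-weights equation at each iteration, and a saturated tanh policy update) follows the scheme summarized in the abstract.

# Minimal sketch (illustrative assumptions throughout): least-squares policy
# iteration for a constrained-input HJB. The scalar system, polynomial basis,
# and tanh-based quasi-norm penalty are assumed here for illustration only.
import numpy as np

lam = 1.0                      # input saturation bound |u| <= lam (assumed)
f = lambda x: -x + x**3        # drift dynamics (assumed example system)
g = lambda x: 1.0              # input gain (assumed constant)
Q = lambda x: x**2             # state penalty (assumed quadratic)
R = 1.0                        # control weighting (assumed scalar)

# Quasi-norm penalty on the constrained input (a commonly used tanh form):
#   W(u) = 2 * integral_0^u lam * atanh(v/lam) * R dv, written in closed form.
def W(u):
    u = np.clip(u, -lam + 1e-9, lam - 1e-9)
    return 2.0 * R * (lam * u * np.arctanh(u / lam)
                      + 0.5 * lam**2 * np.log(1.0 - (u / lam)**2))

# Value-function approximation V(x) ~ w . phi(x) with a polynomial basis.
powers = np.array([2, 4, 6])
phi  = lambda x: np.column_stack([x**p for p in powers])
dphi = lambda x: np.column_stack([p * x**(p - 1) for p in powers])

# Monte Carlo training set from a prescribed region of the state space.
rng = np.random.default_rng(0)
xs = rng.uniform(-0.8, 0.8, size=400)

# Initial stabilizing saturated policy (assumed).
policy = lambda x: -lam * np.tanh(x)

for it in range(20):
    u = policy(xs)
    # Least-squares solve of the linear-in-the-weights Lyapunov equation
    #   dV/dx * (f(x) + g(x) u) + Q(x) + W(u) = 0  at the sample points.
    A = dphi(xs) * (f(xs) + g(xs) * u)[:, None]
    b = -(Q(xs) + W(u))
    w, *_ = np.linalg.lstsq(A, b, rcond=None)
    # Policy improvement with the saturated (tanh) control law.
    policy = lambda x, w=w: -lam * np.tanh((1.0 / (2.0 * lam * R))
                                           * g(x) * (dphi(x) @ w))

print("converged weights:", w)
print("u(0.5) =", float(policy(np.array([0.5]))[0]))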
