Browsing by Subject "Reinforcement Learning"
Now showing 1 - 4 of 4
Item: Development and evaluation of an arterial adaptive traffic signal control system using reinforcement learning (2009-05-15)
Author: Xie, Yuanchang

This dissertation develops and evaluates a new adaptive traffic signal control system for arterials. The control system is based on reinforcement learning, an important research area in distributed artificial intelligence that has been used extensively in many applications, including real-time control. The dissertation first presents a systematic comparison between reinforcement learning control methods and existing adaptive traffic control methods from a theoretical perspective. This comparison shows both the connections between them and the benefits of using reinforcement learning. A Neural-Fuzzy Actor-Critic Reinforcement Learning (NFACRL) method is then introduced for traffic signal control. NFACRL integrates fuzzy logic and neural networks into reinforcement learning and can better handle the curse of dimensionality and the generalization problems associated with ordinary reinforcement learning methods. The NFACRL method is first applied to isolated intersection control under two implementation schemes. The first scheme uses a fixed phase sequence and variable cycle length, while the second optimizes the phase sequence in real time and is not constrained to the concept of a cycle. Both schemes are further extended to arterial control, with each intersection controlled by one NFACRL controller. Strategies for coordinating reinforcement learning controllers are reviewed, and a simple but robust method is adopted for coordinating traffic signals along the arterial. The proposed NFACRL control system is tested at both the isolated-intersection and arterial levels using VISSIM simulation. Testing is conducted under different traffic volume scenarios using real-world traffic data collected during morning, noon, and afternoon peak periods.
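The actor-critic scheme that NFACRL builds on can be sketched in its simplest tabular form. The fuzzy-logic and neural-network function approximation that is the dissertation's contribution is replaced here by plain lookup tables, and all names and parameter values are illustrative assumptions, not taken from the thesis:

```python
ALPHA_CRITIC = 0.1   # critic (value) learning rate -- illustrative
ALPHA_ACTOR = 0.05   # actor (policy) learning rate -- illustrative
GAMMA = 0.95         # discount factor -- illustrative

def actor_critic_step(value, preference, state, action, reward, next_state):
    """One temporal-difference update of the critic and the actor.

    `value` maps states to estimated returns; `preference` maps
    (state, action) pairs to action preferences.
    """
    td_error = reward + GAMMA * value[next_state] - value[state]
    value[state] += ALPHA_CRITIC * td_error                # critic update
    preference[(state, action)] += ALPHA_ACTOR * td_error  # actor update
    return td_error
```

In NFACRL the two tables are replaced by a neuro-fuzzy network, which is what mitigates the curse of dimensionality noted in the abstract.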
The performance of the NFACRL control system is compared with that of optimized pre-timed and actuated control. Testing results based on VISSIM simulation show that the proposed NFACRL control is very promising: it outperforms optimized pre-timed and actuated control in most cases for both isolated-intersection and arterial control. The dissertation closes with a discussion of how to further improve the NFACRL method and implement it in the real world.

Item: Optimal Control of Perimeter Patrol Using Reinforcement Learning (2011-08-08)
Author: Walton, Zachary

Unmanned Aerial Vehicles (UAVs) are being used more frequently in surveillance scenarios for both civilian and military applications. One such application addresses a UAV patrolling a perimeter on which certain stations can receive alerts at random intervals. Once the UAV arrives at an alert site it can take one of two actions:

1. Loiter and gain information about the site.
2. Move on around the perimeter.

The information gained is transmitted to an operator, who uses it to classify the alert. The information is a function of the time the UAV spends at the alert site (the dwell time) and the maximum delay. The goal of the optimization is to maximize the expected discounted information gained about alerts by the UAV's actions at the stations. This optimization problem can be readily solved using Dynamic Programming. Even though this approach generates feasible solutions, there are reasons to experiment with alternatives: when the perimeter patrol problem is expanded, the number of states increases rapidly as stations, nodes, or UAVs are added to the perimeter, which greatly increases the computation time and makes determining the solution intractable.
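The state-space growth that defeats exact Dynamic Programming can be illustrated with a back-of-envelope count. The discretization below (binary alert status per station, one perimeter node per UAV, a few dwell-time levels) is a hypothetical one chosen for illustration, not the thesis's actual formulation:

```python
# Count joint states: alert patterns x UAV positions x dwell clocks.
# All discretization choices here are illustrative assumptions.
def num_states(stations, nodes, uavs, dwell_levels=4):
    alert_patterns = 2 ** stations      # each station quiet or alerted
    uav_positions = nodes ** uavs       # each UAV at one perimeter node
    dwell_clocks = dwell_levels ** uavs # discretized dwell time per UAV
    return alert_patterns * uav_positions * dwell_clocks
```

Each added station doubles the count and each added UAV multiplies it by `nodes * dwell_levels`, which is why the exact approach quickly becomes intractable.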
This thesis attempts to alleviate that problem by implementing a Reinforcement Learning technique, specifically Q-Learning, to obtain the optimal solution. Reinforcement Learning is a simulation-based counterpart of Dynamic Programming and requires less information to compute sub-optimal solutions. The effectiveness of the policies generated using Reinforcement Learning for the perimeter patrol problem has been corroborated numerically in this thesis.

Item: Reinforcement Learning Control with Approximation of Time-Dependent Agent Dynamics (2013-04-30)
Author: Kirkpatrick, Kenton

Reinforcement Learning has received much attention over the years for systems ranging from static game playing to dynamic system control. Using Reinforcement Learning for control of dynamical systems provides the benefit of learning a control policy without needing a model of the dynamics. This opens the possibility of controlling systems for which the dynamics are unknown, but Reinforcement Learning methods like Q-learning do not explicitly account for time. In dynamical systems, time-dependent characteristics can have a significant effect on the control of the system, so it is necessary to account for system time dynamics while not relying on a predetermined model of the system. In this dissertation, algorithms are investigated for expanding the Q-learning algorithm to learn sampling rates and dynamics approximations. For determining a proper sampling rate, the aim is to find the largest sample time that still allows the learning agent to control the system to goal achievement. An algorithm called Sampled-Data Q-learning is introduced for determining both this sample time and the control policy associated with that sampling rate. Results show that the algorithm is capable of achieving a desired sampling rate that allows for system control while not sampling "as fast as possible".
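The tabular Q-learning update that Sampled-Data Q-learning extends can be sketched as follows. The state/action encoding and the parameter values are placeholders, not those used in these theses:

```python
from collections import defaultdict

ALPHA = 0.1   # learning rate -- illustrative
GAMMA = 0.9   # discount factor -- illustrative

def q_update(Q, state, action, reward, next_state, actions):
    """Standard off-policy Q-learning update toward the greedy successor value."""
    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])

# Sampled-Data Q-learning additionally searches over the sample time at
# which this update is applied, keeping the largest one that still lets
# the agent reach the goal.
```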
Determining an approximation of an agent's dynamics can be beneficial for the control of hierarchical multiagent systems by allowing a high-level supervisor to use the dynamics approximations for task allocation decisions. To this end, algorithms are investigated for learning first- and second-order dynamics approximations, respectively called First-Order Dynamics Learning and Second-Order Dynamics Learning. The dynamics learning algorithms are evaluated on several examples that show their capability to learn accurate approximations of state dynamics. All of these algorithms are then evaluated on hierarchical multiagent systems for determining task allocation. The results show that the algorithms successfully determine appropriate sample times and accurate dynamics approximations for the agents investigated.

Item: Reinforcement Learning for Active Length Control and Hysteresis Characterization of Shape Memory Alloys (2010-01-16)
Author: Kirkpatrick, Kenton C.

Shape Memory Alloy actuators can be used for morphing, or shape change, by controlling their temperature, which is effectively done by applying a voltage difference across their length. Control of these actuators requires determining the relationship between voltage and strain so that an input-output map can be developed. In this research, a computer simulation uses a hyperbolic tangent curve to simulate the hysteresis behavior of a virtual Shape Memory Alloy wire in temperature-strain space, and uses a Reinforcement Learning algorithm called Sarsa to learn a near-optimal control policy and map the hysteretic region. The algorithm developed in simulation is then applied to an experimental apparatus in which a Shape Memory Alloy wire is characterized in temperature-strain space. The algorithm is then modified so that the learning is done in voltage-strain space. This allows the learning of a control policy that provides a direct input-output mapping of voltage to position for a real wire.
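The Sarsa algorithm named in the abstract is the on-policy counterpart of Q-learning. The sketch below uses placeholder parameters and a generic state/action encoding rather than the actual voltage-strain discretization of the experiment:

```python
ALPHA = 0.1  # learning rate -- illustrative
GAMMA = 0.9  # discount factor -- illustrative

def sarsa_update(Q, s, a, reward, s_next, a_next):
    """Sarsa bootstraps on the action actually taken next,
    unlike Q-learning's max over successor actions."""
    Q[(s, a)] += ALPHA * (reward + GAMMA * Q[(s_next, a_next)] - Q[(s, a)])
```

Because the update uses the action the current policy actually selects, Sarsa learns the value of the exploring policy itself, which is convenient when exploration sweeps out the hysteresis region as described above.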
This research was successful in achieving its objectives. In the simulation phase, the Reinforcement Learning algorithm proved capable of controlling a virtual Shape Memory Alloy wire by determining an accurate input-output map from temperature to strain. The virtual model was also shown to be accurate for characterizing Shape Memory Alloy hysteresis, validated by comparison with the commonly used modified Preisach model. The validated algorithm was successfully applied to an experimental apparatus, in which both major and minor hysteresis loops were learned in temperature-strain space. Finally, the modified algorithm was able to learn the control policy in voltage-strain space with the capability of achieving all learned goal states within a tolerance of ±0.5% strain, or ±0.65 mm. This policy provides the capability of achieving any learned goal when starting from any initial strain state. This research has validated that Reinforcement Learning is capable of determining a control policy for Shape Memory Alloy crystal phase transformations, and opens the door for research into the development of length-controllable Shape Memory Alloy actuators.
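A hyperbolic-tangent hysteresis model of the kind used for the virtual wire in the simulation phase can be sketched as below. The transformation temperatures, transition width, and strain limit are illustrative placeholders, not the fitted values from the research:

```python
import math

MAX_STRAIN = 0.05            # strain of the fully cool (martensite) wire -- assumed
T_HEAT, T_COOL = 70.0, 50.0  # assumed transformation centers, deg C
WIDTH = 5.0                  # assumed transition width, deg C

def strain(temperature, heating):
    """Major-loop strain: separate tanh branches for heating and cooling.

    The tanh ramps from +1 (cool, long wire) to -1 (hot, contracted wire);
    the offset between the two branch centers produces the hysteresis.
    """
    center = T_HEAT if heating else T_COOL
    return 0.5 * MAX_STRAIN * (1.0 - math.tanh((temperature - center) / WIDTH))
```

At a temperature between the two branch centers the heating branch (not yet transformed, still long) returns more strain than the cooling branch (still contracted), which is the hysteresis gap the Sarsa agent maps out.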