Browsing by Subject "multi-agent systems"
Now showing 1 - 4 of 4
Item
Cybernetic automata: An approach for the realization of economical cognition for multi-robot systems (2009-05-15)
Mathai, Nebu John
The multi-agent robotics paradigm has attracted much attention due to the variety of pertinent applications that are well served by the use of a multiplicity of agents (including space robotics, search and rescue, and mobile sensor networks). For most applications, however, this paradigm demands economical, lightweight agent designs, for reasons of longer operational life, lower economic cost, and faster, more easily verified designs. An important contributing factor to an agent's cost is its control architecture. Due to the emergence of novel implementation technologies carrying the promise of economical implementation, we consider the development of a technology-independent specification for computational machinery. To that end, the use of cybernetics toolsets (control and dynamical systems theory) is appropriate, enabling a principled specification of robotic control architectures in mathematical terms that could be mapped directly to diverse implementation substrates. This dissertation therefore addresses the problem of developing a technology-independent specification for lightweight control architectures that enable robotic agents to serve in a multi-agent scheme. We present the principled design of static and dynamical regulators that elicit useful behaviors, and integrate these within an overall architecture for both single- and multi-agent control. Since the use of control theory can be limited in unstructured environments, a major focus of the work is on the engineering of emergent behavior. The proposed scheme is highly decentralized, requiring only local sensing and no inter-agent communication. Beyond several simulation-based studies, we provide experimental results for a two-agent system, based on a custom implementation employing field-programmable gate arrays.

Item
Multi-robot Caravanning (2013-11-06)
Mahadevan, Aditya
We study multi-robot caravanning, which is loosely defined as the problem of a heterogeneous team of robots visiting specific areas of an environment (waypoints) as a group. After formally defining this problem, we propose a novel solution that requires minimal communication and scales with the number of waypoints and robots. Our approach restricts explicit communication and coordination to occur only when robots reach waypoints, and relies on implicit coordination when moving between a given pair of waypoints. At the heart of our algorithm is the use of leader election to efficiently exploit the unique environmental knowledge available to each robot in order to plan paths for the group, which makes it general enough to work with robots that have heterogeneous representations of the environment. We implement our approach both in simulation and on a physical platform, and characterize its performance under various scenarios. We demonstrate that our approach can successfully be used to combine the planning capabilities of different agents.
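As a point of reference for the leader-election idea in the caravanning abstract above, the following minimal Python sketch elects a leader at each waypoint from among the robots whose environmental knowledge covers that waypoint. It is not taken from the thesis: the Robot class, the "map covers the waypoint" test, and the lowest-id tie-break are illustrative assumptions.

    # Hypothetical sketch of per-waypoint leader election for caravanning.
    # Names and the selection rule are assumptions made for illustration.
    from dataclasses import dataclass, field

    @dataclass
    class Robot:
        robot_id: int
        known_map: set = field(default_factory=set)  # waypoints this robot can plan to

        def can_plan_to(self, waypoint):
            return waypoint in self.known_map

    def elect_leader(robots, next_waypoint):
        """Pick a leader only among robots that know the next waypoint; break ties by lowest id."""
        candidates = [r for r in robots if r.can_plan_to(next_waypoint)]
        if not candidates:
            return None                              # no robot knows the area yet
        return min(candidates, key=lambda r: r.robot_id)

    def caravan(robots, waypoints):
        for wp in waypoints:
            leader = elect_leader(robots, wp)
            if leader is None:
                continue                             # would require exploration in practice
            # Explicit coordination happens only here, at the waypoint;
            # travel between waypoints would rely on implicit following.
            print(f"robot {leader.robot_id} leads the group to waypoint {wp}")

    if __name__ == "__main__":
        team = [Robot(0, {"A", "B"}), Robot(1, {"B", "C"}), Robot(2, {"C", "D"})]
        caravan(team, ["A", "B", "C", "D"])

Electing a new leader at every waypoint is what lets robots with different partial maps take turns contributing their knowledge, which is the property the abstract emphasizes.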
Item
Roadmap-Based Techniques for Modeling Group Behaviors in Multi-Agent Systems (2012-07-16)
Rodriguez, Samuel Oscar
Simulating large numbers of agents performing complex behaviors in realistic environments is a difficult problem with applications in robotics, computer graphics, and animation. A multi-agent system can be a useful tool for studying a range of situations in simulation in order to plan and train for actual events. Systems supporting such simulations can be used to study and train for emergency or disaster scenarios, including search and rescue, civilian crowd control, evacuation of a building, and many other training situations.
This work describes our approach to multi-agent systems, which integrates a roadmap-based approach with agent-based systems for groups of agents performing a wide range of behaviors. The system we have developed is highly customizable and allows us to study a variety of behaviors and scenarios. It is tunable in the kinds of agents that can exist and in the parameters that describe them. The agents can have any number of behaviors, which dictate how they react throughout a simulation. Aspects unique to our approach to multi-agent group behavior are the environmental encoding that the agents use when navigating and the extensive use of the roadmap in our behavioral framework. Our roadmap-based approach can encode both basic and very complex environments, including multi-level buildings, terrains, and stadiums. In this work, we develop techniques to improve the simulation of multi-agent systems. The movement strategies we have developed can be used to validate agent movement in a simulated environment and to evaluate building designs by varying portions of the environment to see the effect on pedestrian flow. The strategies we develop for searching and tracking improve the ability of agents within our roadmap-based framework to clear areas and track agents in realistic environments.
The application focus of this work is on pursuit-evasion and evacuation planning. In pursuit-evasion, one group of agents, the pursuers, attempts to find and capture another set of agents, the evaders; the evaders aim to avoid the pursuers. In evacuation planning, the evacuating agents attempt to find valid paths through potentially complex environments to a safe goal location determined by their environmental knowledge. Another group of agents, the directors, may attempt to guide the evacuating agents. These applications require the behaviors created to be tunable to a range of scenarios so they can reflect real-world reactions by agents. They also potentially require interaction and coordination between agents in order to improve the realism of the scenario being studied. These applications illustrate the scalability of our system in terms of the number of agents that can be supported, the kinds of realistic environments that can be handled, and the behaviors that can be simulated.
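The pursuit-evasion setting described above can be pictured with a small roadmap example. The Python sketch below is not Rodriguez's framework; the dictionary roadmap, BFS replanning, the stationary evader, and the capture-on-same-node rule are assumptions made purely for illustration.

    # Illustrative roadmap-based pursuit step: the pursuer replans a shortest
    # path over the roadmap toward the evader's node at every step.
    from collections import deque

    def bfs_next_step(roadmap, start, goal):
        """Return the first node along a shortest roadmap path from start to goal."""
        parents = {start: None}
        queue = deque([start])
        while queue:
            node = queue.popleft()
            if node == goal:
                # walk back through parents to recover the first move out of start
                while parents[node] is not None and parents[node] != start:
                    node = parents[node]
                return node if node != start else start
            for nbr in roadmap[node]:
                if nbr not in parents:
                    parents[nbr] = node
                    queue.append(nbr)
        return start   # goal unreachable: stay put

    def pursue(roadmap, pursuer, evader, max_steps=20):
        """Advance the pursuer one roadmap edge per step; the evader stays put in this toy case."""
        for step in range(max_steps):
            if pursuer == evader:
                return step                      # captured
            pursuer = bfs_next_step(roadmap, pursuer, evader)
        return None                              # not captured within the step budget

    if __name__ == "__main__":
        # A tiny roadmap: nodes are rooms, edges are traversable connections.
        roadmap = {"hall": ["room1", "room2"], "room1": ["hall"],
                   "room2": ["hall", "exit"], "exit": ["room2"]}
        print(pursue(roadmap, pursuer="hall", evader="exit"))

Replanning only the next edge at each step keeps the pursuer responsive to an evader that would, in a fuller simulation, move between steps.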
Item
Scaling reinforcement learning to the unconstrained multi-agent domain (2009-06-02)
Palmer, Victor
Reinforcement learning is a machine learning technique designed to mimic the way animals learn by receiving rewards and punishments. It is designed to train intelligent agents when very little is known about the agent's environment, and consequently the agent's designer is unable to hand-craft an appropriate policy. Using reinforcement learning, the designer can merely give reward to the agent when it does something right, and the algorithm will craft an appropriate policy automatically. In many situations it is desirable to use this technique to train systems of agents (for example, to train robots to play RoboCup soccer in a coordinated fashion). Unfortunately, several significant computational issues arise when using this technique to train systems of agents.
This dissertation introduces a suite of techniques that overcome many of these difficulties in various common situations. First, we show how multi-agent reinforcement learning can be made more tractable by forming coalitions out of the agents and training each coalition separately. Coalitions are formed using information-theoretic techniques, and we find that with a coalition-based approach, the computational complexity of reinforcement learning can be made linear in the total system agent count. Next we look at ways to integrate domain knowledge into the reinforcement learning process, and how this can significantly improve policy quality in multi-agent situations. Specifically, we find that integrating domain knowledge into a reinforcement learning process can overcome training data deficiencies and allow the learner to converge to acceptable solutions when a lack of training data would otherwise have prevented such convergence. We then show how to train policies over continuous action spaces, which can reduce problem complexity for domains that require continuous actions (analog controllers) by eliminating the need to finely discretize the action space. Finally, we look at ways to perform reinforcement learning on modern GPUs and show how this lets us tackle significantly larger problems. We find that by offloading some of the RL computation to the GPU, we can achieve a speedup factor of almost 4.5 in the total training process.
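For readers unfamiliar with the underlying machinery, the sketch below shows textbook tabular Q-learning on a toy environment. It is not Palmer's algorithm: the coalition framing (one such learner per coalition), the ToyCorridor environment, and all hyperparameters are assumptions for illustration only.

    # Textbook tabular Q-learning; under a coalition-based scheme, each coalition
    # of agents would be trained with its own learner of this kind.
    import random
    from collections import defaultdict

    class ToyCorridor:
        """Minimal stand-in environment: move along a line until reaching the goal cell."""
        def __init__(self, length=5):
            self.length = length
            self.pos = 0

        def reset(self):
            self.pos = 0
            return self.pos

        def step(self, action):                      # action: -1 (left) or +1 (right)
            self.pos = max(0, min(self.length, self.pos + action))
            done = self.pos == self.length
            return self.pos, (1.0 if done else -0.01), done

    def train_coalition(env, actions, episodes=300, alpha=0.1, gamma=0.95, epsilon=0.1):
        """Learn a Q-table for one coalition; env must expose reset() and step()."""
        q = defaultdict(float)                       # (state, action) -> value estimate
        for _ in range(episodes):
            state, done = env.reset(), False
            while not done:
                # epsilon-greedy action selection
                if random.random() < epsilon:
                    action = random.choice(actions)
                else:
                    action = max(actions, key=lambda a: q[(state, a)])
                next_state, reward, done = env.step(action)
                # one-step Q-learning update toward the reward plus discounted best next value
                best_next = max(q[(next_state, a)] for a in actions)
                q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
                state = next_state
        return q

    if __name__ == "__main__":
        q_table = train_coalition(ToyCorridor(), actions=[-1, +1])
        print(round(max(q_table.values()), 3))

Training each coalition's Q-table independently, as hinted at in the abstract, is what keeps the total work growing with the number of coalitions rather than with the exponentially large joint state-action space of all agents at once.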