Browsing by Subject "CR-prolog"
Item: A declarative framework for modeling multi-agent systems (2007-05)
Authors: Gelfond, Gregory; Watson, Richard; Cooke, Daniel E.; Rushton, J. Nelson

Current work in answer-set programming, as applied to the development of reasoning agents, has centered on single-agent systems. A well-established body of research demonstrates its applicability to such domains, providing a thorough development methodology on a theoretical foundation. This work aims to extend the field to multi-agent domains. We present a general framework for reasoning about cooperative multi-agent systems. We begin with an overview of the current framework for representing single-agent systems, as well as the syntax and semantics of the logic programming language CR-Prolog. With this baseline established, we extend the fundamental notion of an agent to support communication by introducing special named sets of fluents known as requests. We then define the notions of an agent's local and global perspectives and their respective diagrams, which serve as the theoretical foundation of this work. After discussing the general framework, we present a motivating example of a simple multi-agent domain. This example is used to develop a methodology for representing agents capable of reasoning in such domains using CR-Prolog, together with an axiomatization of multi-agent communication. Finally, a series of results detailing fundamental properties of the framework is presented.

Item: Detecting suspicious input in intelligent systems using answer set programming (Texas Tech University, 2005-05)
Authors: Gianoutsos, Nicholas; Gelfond, Michael; Rushton, J. Nelson; Watson, Richard

When presented with bad information, people tend to make bad decisions. Even a rational person cannot consistently make good decisions when given unsound information. The same holds true for intelligent agents: if at any point an agent accepts bad information into its reasoning process, the soundness of its decision making will degrade. The purpose of this work is to develop programming methods that give intelligent systems the ability to handle potentially false information in a reasonable manner. We propose methods for detecting unsound information, which we call outliers, and for detecting the sources of these outliers. An outlier is informally defined as an observation, or any combination of observations, that lies outside the realm of plausibility for a given state of the environment. With such reasoning ability, an intelligent agent can learn not only about its environment but also about the reliability of the sources reporting the information. Throughout this work we introduce programming methods that enable intelligent agents to detect outliers in input information, as well as learn about the accuracy of the sources submitting information.
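The informal definition of an outlier above — a set of observations incompatible with any plausible state — can be illustrated procedurally. The following Python toy is our own sketch, not the thesis's CR-Prolog encoding: the `reports` data and the `consistent` plausibility check are hypothetical, and a real answer-set formulation would use consistency-restoring rules rather than this brute-force search for a minimal outlier set.

```python
from itertools import combinations

def find_outliers(observations, consistent):
    """Return a minimal set of observations whose removal leaves
    the remaining observations consistent (None if impossible)."""
    obs = list(observations)
    for k in range(len(obs) + 1):                 # smallest sets first
        for dropped in combinations(obs, k):
            remaining = [o for o in obs if o not in dropped]
            if consistent(remaining):
                return set(dropped)
    return None

# Toy domain: a source reports the door as both open and closed.
reports = [("door", "open"), ("door", "closed"), ("light", "on")]

def consistent(obs):
    # A state is implausible if one fluent is given two values.
    seen = {}
    for fluent, value in obs:
        if seen.setdefault(fluent, value) != value:
            return False
    return True

print(find_outliers(reports, consistent))
```

Tracking which sources contributed the observations in the returned set is then a small extension, mirroring the thesis's goal of learning source reliability.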