Browsing by Subject "Artificial Intelligence"
Now showing 1 - 6 of 6
Item
A practical method for proactive information exchange within multi-agent teams (Texas A&M University, 2004-11-15) Rozich, Ryan Timothy
Psychological studies have shown that information exchange is a key component of effective teamwork. In addition to requesting information that they need for their tasks, members of effective teams often proactively forward information that they believe other teammates require to complete their tasks. We refer to this type of communication as proactive information exchange; its formalization and implementation are the subject of this thesis. The central question we are trying to answer is: under normative conditions, what types of information needs can agent teammates extract from shared plans, and how can they use these information needs to proactively forward information to teammates? We make two key claims about proactive information exchange: first, agents need to be aware of the information needs of their teammates, and these information needs can be inferred from shared plans; second, agents need to be able to model the beliefs of others in order to deliver this information efficiently. To demonstrate this, we have developed an algorithm named PIEX, which, for each agent on a team, reasonably approximates the information needs of other team members based on analysis of a shared team plan. This algorithm transforms a team plan into an individual plan by inserting communicative tasks into agents' individual plans to deliver information to those agents who need it. We incorporate a previously developed architecture for multi-agent belief reasoning. In addition to this algorithm for proactive information exchange, we have developed a formal framework both to describe scenarios in which proactive information exchange takes place and to evaluate the quality of the communication events that agents running the PIEX algorithm generate.
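The core idea of the abstract above, inferring teammates' information needs from a shared plan and inserting communicative tasks, can be sketched as a toy example. This is purely illustrative and not the thesis's PIEX implementation; all names, plan structures, and the "tell" task format are hypothetical.

```python
# Toy sketch of proactive information exchange: from a shared team plan,
# each agent infers which preconditions of teammates' tasks it can observe,
# and appends a "tell" task to its own plan for each one.

def infer_info_needs(team_plan):
    """Map each piece of information to the agents whose tasks need it."""
    needs = {}
    for task in team_plan:
        for info in task["preconditions"]:
            needs.setdefault(info, set()).add(task["agent"])
    return needs

def insert_tell_tasks(agent, own_plan, team_plan, observable):
    """Extend an agent's individual plan with communicative tasks."""
    needs = infer_info_needs(team_plan)
    plan = list(own_plan)
    for info, consumers in needs.items():
        if info in observable:  # this agent can supply the information
            for other in sorted(consumers - {agent}):
                plan.append(("tell", other, info))
    return plan

team_plan = [
    {"agent": "scout", "preconditions": [], "task": "patrol"},
    {"agent": "carrier", "preconditions": ["enemy_position"], "task": "deliver"},
]
plan = insert_tell_tasks("scout", [("do", "patrol")], team_plan,
                         observable={"enemy_position"})
print(plan)  # [('do', 'patrol'), ('tell', 'carrier', 'enemy_position')]
```

The sketch covers only the plan-analysis step; the thesis additionally models teammates' beliefs so that information already known to a recipient is not re-sent.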
The contributions of this work are a formal and implemented algorithm for information exchange for maintaining a shared mental model, and a framework for evaluating domains in which this type of information exchange is useful.

Item
Automated domain analysis and transfer learning in general game playing (2010-08) Kuhlmann, Gregory John; Stone, Peter, 1971-; Lifschitz, Vladimir; Mooney, Raymond J.; Porter, Bruce W.; Schaeffer, Jonathan
Creating programs that can play games such as chess, checkers, and backgammon at a high level has long been a challenge and benchmark for AI. Computer game playing is arguably one of AI's biggest success stories. Several game-playing systems developed in the past, such as Deep Blue, Chinook, and TD-Gammon, have demonstrated competitive play against top human players. However, such systems are limited in that they play only one particular game, and they typically must be supplied with game-specific knowledge. While their performance is impressive, it is difficult to determine whether their success is due to generally applicable techniques or to human game analysis. A general game player is an agent capable of taking as input a description of a game's rules and proceeding to play without any subsequent human input. In doing so, the agent, rather than the human designer, is responsible for the domain analysis. Developing such a system requires the integration of several AI components, including theorem proving, feature discovery, heuristic search, and machine learning. In the general game playing scenario, the player agent is supplied with a game's rules in a formal language prior to match play. This thesis contributes a collection of general methods for analyzing these game descriptions to improve performance. Prior work on automated domain analysis has focused on generating heuristic evaluation functions for use in search. This thesis builds upon that work by introducing a novel feature generation method.
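As a hypothetical illustration of what a feature-based heuristic evaluation function of the kind mentioned above might look like (the features, weights, and state representation here are invented, not taken from the thesis):

```python
# Build an evaluation function as a weighted sum of generated features.
# Each feature maps a game state to a number; the evaluator combines them.

def make_evaluator(features, weights):
    def evaluate(state):
        return sum(w * f(state) for f, w in zip(features, weights))
    return evaluate

# Toy features over a board-game state {"my_pieces": int, "mobility": int}.
def piece_count(state):
    return state["my_pieces"]

def mobility(state):
    return state["mobility"]

evaluate = make_evaluator([piece_count, mobility], [1.0, 0.5])
print(evaluate({"my_pieces": 8, "mobility": 10}))  # -> 13.0
```

Evaluators built this way can be compared by playing the resulting search-based players against one another, which is the spirit of the comparison method the thesis describes.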
Also, I introduce a method for generating and comparing simple evaluation functions based on these features. I describe how more sophisticated evaluation functions can be generated through learning. Finally, this thesis demonstrates the utility of domain analysis in facilitating knowledge transfer between games for improved learning speed. The contributions are fully implemented, with empirical results, in a general game playing system.

Item
Development and Implementation of an Artificially Intelligent Search Algorithm for Sensor Fault Detection Using Neural Networks (Texas A&M University, 2004-09-30) Singh, Harkirat
This work is aimed at the development of an artificially intelligent search algorithm used in conjunction with an Auto-Associative Neural Network (AANN) to help locate and reconstruct faulty sensor inputs in control systems. The AANN can be trained to detect when sensors go faulty, but the problem of locating the faulty sensor still remains. The search algorithm aids the AANN in locating the faulty sensors and reconstructing their actual values. The algorithm uses domain-specific heuristics based on the inherent behavior of the AANN to achieve its task. Common sensor errors such as drift, shift, and random errors, and the algorithm's response to them, have been studied. The issue of noise has also been investigated. These areas cover the first part of this work. The second part focuses on the development of a web interface that implements and displays the working of the algorithm. The interface allows any client on the World Wide Web to connect to the engineering software MATLAB.
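One common way to localize a faulty sensor with an auto-associative model, sketched here purely as an illustration (this is not the thesis's actual heuristic search, and the median-based "model" stands in for a trained AANN):

```python
# Locate a faulty sensor by checking which single input, when replaced by
# the auto-associative model's reconstruction, most reduces the overall
# reconstruction error.

def reconstruction_error(model, reading):
    recon = model(reading)
    return sum((a - b) ** 2 for a, b in zip(reading, recon))

def locate_faulty_sensor(model, reading):
    base = reconstruction_error(model, reading)
    best_idx, best_err = None, base
    for i in range(len(reading)):
        trial = list(reading)
        trial[i] = model(reading)[i]  # substitute the reconstructed value
        err = reconstruction_error(model, trial)
        if err < best_err:
            best_idx, best_err = i, err
    return best_idx  # None means no single substitution helps

# Stand-in "AANN": three redundant sensors that should agree;
# reconstruct each channel with the median of the readings.
def model(x):
    m = sorted(x)[len(x) // 2]
    return [m] * len(x)

print(locate_faulty_sensor(model, [10.0, 10.1, 37.0]))  # -> 2 (drifted sensor)
```

The substituted reconstruction also serves as the repaired value for the faulty channel, mirroring the "locate and reconstruct" goal stated in the abstract.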
The client can then simulate a drift, shift, or random error using the graphical user interface and observe the response of the algorithm.

Item
Machine Learning Tools for Audio-Visual Transcriptions, Captions, and Text Analysis in Digital Libraries (Texas Digital Library, 2023-05-17) Hicks, William
Rapid advances in inexpensive or free-to-use artificial intelligence and text-processing applications now make it possible for digital libraries to produce affordable, relatively high-quality text derivatives (captions, transcripts, subtitles, translations, etc.) of many audio-visual (AV) materials held in repositories and to expose these materials to a wider audience than would otherwise be possible. While not perfect, recently released systems produce outputs that often meet or exceed the accuracy of text-based OCR, and natural language processing on these outputs holds promise for generating metadata or performing other research-oriented tasks. Members of the UNT digital libraries team will discuss recent work they have explored in this area, comparing output quality, costs relative to other creation methods, and resource commitments, and will demonstrate other lessons learned along the way.

Item
Modding for Emergence: Using Cellular Automata, Randomness, and Influence Maps in the Source Game Engine (2012-02-14) Bertka, Benjamin Theodore
Recent advances in the field of educational technology have promoted the re-purposing of entertainment-oriented games and software for educational applications. This thesis extends a project developed at Texas A&M University called Room 309, a re-purposed modification of Valve Software's Source Development Kit that models classroom scenarios for pre-service teachers. To further explore effectiveness in the area of re-playability, this work incorporates emergent game behaviors and environments using cellular automata, randomness, and influence maps within the existing non-emergent structure.
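As a small illustration of one of the ingredients named in the title above, here is a one-dimensional cellular automaton update (Wolfram's Rule 110), the kind of simple local rule that can produce emergent, hard-to-predict behavior. This is a generic example, not code from the Room 309 mod.

```python
# One synchronous update of a binary 1-D cellular automaton on a ring.
# The rule number's bits give the next state for each 3-cell neighborhood.

RULE = 110

def step(cells):
    n = len(cells)
    out = []
    for i in range(n):
        left, center, right = cells[i - 1], cells[i], cells[(i + 1) % n]
        pattern = (left << 2) | (center << 1) | right  # neighborhood as 0..7
        out.append((RULE >> pattern) & 1)              # look up next state
    return out

row = [0, 0, 0, 1, 0, 0, 0]
for _ in range(3):
    row = step(row)
print(row)  # -> [1, 1, 0, 1, 0, 0, 0]
```

Influence maps work analogously at the level of game spaces: each cell accumulates a score from nearby agents or objectives, and AI behavior is driven by the resulting gradient rather than by scripted triggers.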
By introducing these qualities, game play is expected to become less predictable, thus increasing the effectiveness of Room 309 as a learning tool.

Item
Reinforcement Learning Control with Approximation of Time-Dependent Agent Dynamics (2013-04-30) Kirkpatrick, Kenton
Reinforcement Learning has received a lot of attention over the years for systems ranging from static game playing to dynamic system control. Using Reinforcement Learning for the control of dynamical systems provides the benefit of learning a control policy without needing a model of the dynamics. This opens the possibility of controlling systems for which the dynamics are unknown, but Reinforcement Learning methods like Q-learning do not explicitly account for time. In dynamical systems, time-dependent characteristics can have a significant effect on the control of the system, so it is necessary to account for system time dynamics without relying on a predetermined model of the system. In this dissertation, algorithms are investigated for extending the Q-learning algorithm to account for the learning of sampling rates and dynamics approximations. For determining a proper sampling rate, it is desired to find the largest sample time that still allows the learning agent to control the system to goal achievement. An algorithm called Sampled-Data Q-learning is introduced for determining both this sample time and the control policy associated with that sampling rate. Results show that the algorithm is capable of achieving a desired sampling rate that allows for system control while not sampling "as fast as possible". Determining an approximation of an agent's dynamics can be beneficial for the control of hierarchical multi-agent systems by allowing a high-level supervisor to use the dynamics approximations for task-allocation decisions. To this end, algorithms are investigated for learning first- and second-order dynamics approximations.
These algorithms are respectively called First-Order Dynamics Learning and Second-Order Dynamics Learning. The dynamics-learning algorithms are evaluated on several examples that show their capability to learn accurate approximations of state dynamics. All of these algorithms are then evaluated on hierarchical multi-agent systems for determining task allocation. The results show that the algorithms successfully determine appropriate sample times and accurate dynamics approximations for the agents investigated.
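The sampling-rate idea in the abstract above, keeping the largest sample time at which the goal can still be reached, can be sketched as follows. This is a heavily simplified illustration: the Q-learning inner loop is replaced by a trivial simulated controller, and the dynamics, goal band, and function names are all invented for the example.

```python
# Search candidate sample times from slowest to fastest and keep the first
# one at which a controller sampling at that rate can still stop inside the
# goal band (a stand-in for "control the system to goal achievement").

def can_reach_goal(dt, speed=2.0, goal=(9.0, 11.0), horizon=20):
    x = 0.0
    lo, hi = goal
    for _ in range(horizon):
        if lo <= x <= hi:
            return True
        x += speed * dt if x < lo else 0.0  # simple policy at each sample
    return lo <= x <= hi

def largest_feasible_sample_time(candidates):
    for dt in sorted(candidates, reverse=True):  # prefer the slowest sampling
        if can_reach_goal(dt):
            return dt
    return None

print(largest_feasible_sample_time([0.5, 1.0, 2.0, 4.0]))  # -> 1.0
```

With dt = 2.0 or 4.0 the state jumps over the goal band between samples, so the search settles on dt = 1.0: slow enough to avoid sampling "as fast as possible", fast enough to still reach the goal.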