Reinforcement Learning Based Strategies For Adaptive Wireless Sensor Network Management
In wireless sensor networks (WSNs), resource-constrained nodes are expected to operate in highly dynamic and often unattended environments. WSN applications must cope with the dynamicity and uncertainty intrinsic to sensor networks while simultaneously achieving efficient resource utilization. A middleware framework with support for autonomous, adaptive, and distributed sensor management can simplify the development of such applications. We present a reinforcement learning based WSN middleware framework that enables autonomous, adaptive applications with support for efficient resource management. The uniqueness of our framework lies in its bottom-up approach: each sensor node is responsible for its own resource allocation and task selection while ensuring optimization of system-wide parameters such as total energy usage and network lifetime. The framework allows the creation of a distributed, scalable system that still meets the application's goals.

In this dissertation, a Q-learning based scheme called DIRL (Distributed Independent Reinforcement Learning) is presented first. DIRL learns the utility of performing various tasks over time using mostly local information at each node, and uses these utility values together with application constraints to manage tasks subject to optimal energy usage. The DIRL scheme is then extended into a two-tier reinforcement learning framework consisting of micro-learning and macro-learning. Micro-learning enables individual sensor nodes to self-schedule their tasks using local information, allowing real-time adaptation as in DIRL. Macro-learning governs the micro-learners by setting their utility functions, steering the system toward the application's optimization goal (e.g., maximizing network lifetime). The effectiveness of our framework is exemplified by designing a tracking/surveillance application on top of it.
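The micro-learning idea above can be illustrated with a minimal sketch. The class below is not the dissertation's implementation; it is a generic stateless Q-learner (epsilon-greedy task selection with an incremental utility update) of the kind DIRL builds on. All names, parameters, and reward values are illustrative assumptions.

```python
import random

class MicroLearner:
    """Hypothetical sketch of a DIRL-style micro-learner: a sensor node
    that learns per-task utilities with stateless Q-learning and selects
    tasks epsilon-greedily using only locally observed rewards."""

    def __init__(self, tasks, alpha=0.1, epsilon=0.2):
        self.q = {t: 0.0 for t in tasks}   # learned utility per task
        self.alpha = alpha                  # learning rate
        self.epsilon = epsilon              # exploration probability

    def select_task(self):
        # Explore occasionally; otherwise exploit the best-known task.
        if random.random() < self.epsilon:
            return random.choice(list(self.q))
        return max(self.q, key=self.q.get)

    def update(self, task, reward):
        # Incremental Q-update: move the utility toward the observed reward.
        self.q[task] += self.alpha * (reward - self.q[task])


# Illustrative run: the per-task rewards below are invented for
# demonstration and would in practice come from the application's
# utility function (as set by the macro-learner) minus energy cost.
random.seed(1)
node = MicroLearner(["sample", "transmit", "sleep"])
true_reward = {"sample": 1.0, "transmit": 0.4, "sleep": 0.1}
for _ in range(500):
    t = node.select_task()
    node.update(t, true_reward[t] + random.gauss(0, 0.05))
```

After a few hundred decisions the node's utility estimates separate the tasks, and exploitation concentrates on the highest-utility one; in the two-tier scheme, the macro-learner would shape the reward signal itself rather than the node's learning rule.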
Results of simulation studies are then presented that compare the performance of our scheme against existing approaches. In general, for applications requiring autonomous adaptation, our two-tier reinforcement learning based scheme is on average about 50% more efficient than micro-learning alone and many times more efficient than traditional resource management schemes such as static scheduling, while maintaining the necessary accuracy and performance.

Efficient data collection in sparse WSNs by special nodes called Mobile Data Collectors (MDCs) that visit sensor nodes is also investigated. As contact times are not known a priori, and in order to minimize energy consumption, the discovery of an incoming MDC by a static sensor node is a critical task. Discovery is challenging because MDCs participating in different applications exhibit different mobility patterns, and hence each application requires its own discovery strategy. In this context, an adaptive discovery strategy is proposed that exploits the DIRL framework and can be effectively applied to various applications while minimizing energy consumption. The principal idea is to learn the MDC's arrival pattern and tune the sensor node's duty cycle accordingly. Extensive simulation analysis demonstrates the energy efficiency and effectiveness of the proposed strategy.

Finally, the design and evaluation of a complete, generalized middleware framework called DReL is presented, with a focus on distributed sensor management on top of our multi-layer reinforcement learning scheme. DReL incorporates mechanisms and a communication paradigm for task, data, and reward distribution, and provides an easy-to-use interface for application developers to create customized applications with specific QoS and optimization requirements. The adequacy and efficiency of DReL are shown by developing a few sample applications on top of it and evaluating their performance.
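The adaptive MDC discovery idea described above, learning the arrival pattern and concentrating the radio duty cycle around the predicted next contact, can be sketched as follows. This is not the dissertation's learning algorithm; it is a simplified stand-in that estimates the inter-arrival time with an exponential moving average, and all names and parameters are illustrative assumptions.

```python
class MdcDiscovery:
    """Hypothetical sketch of adaptive MDC discovery: a static node
    estimates the MDC's inter-arrival time from observed contacts and
    keeps its radio on only inside a guard window around the predicted
    next arrival, sleeping (radio off) the rest of the time."""

    def __init__(self, alpha=0.3, guard=5.0):
        self.alpha = alpha          # smoothing factor for the estimate
        self.guard = guard          # half-width of the listen window (s)
        self.interval = None        # learned inter-arrival estimate (s)
        self.last_arrival = None    # time of the last observed contact

    def observe_arrival(self, t):
        # Refine the inter-arrival estimate after a successful contact.
        if self.last_arrival is not None:
            gap = t - self.last_arrival
            if self.interval is None:
                self.interval = gap
            else:
                self.interval = (1 - self.alpha) * self.interval + self.alpha * gap
        self.last_arrival = t

    def should_listen(self, t):
        # Duty-cycle decision: radio on only near the predicted arrival.
        if self.interval is None:
            return True             # no estimate yet: stay awake to learn
        predicted = self.last_arrival + self.interval
        return abs(t - predicted) <= self.guard


# Illustrative run with a fabricated periodic MDC (period 100 s):
d = MdcDiscovery()
for arrival in (0, 100, 200, 300):
    d.observe_arrival(arrival)
```

Once the estimate stabilizes, the node listens only in a short window around each predicted visit, which is where the energy savings over an always-on or fixed-duty-cycle radio come from.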