Multiple-instance active learning with online labeling
Robots are designed to automate a variety of processes, but partial observability and non-determinism in the real world make fully autonomous operation expensive. To fulfill their goals, robots therefore need to be trained with multi-modal sensory cues, such as visual and verbal input. In this thesis, we move toward incremental learning for robots with minimal human intervention. The existing framework, multiple-instance active learning, queries from a fixed, static set of unlabeled data. Our contribution is to enable queries over dynamically arriving unlabeled data by proposing a new active learning method called Bag Uncertainty. In the Bag Uncertainty method, a query is issued when the robot is uncertain about recently arrived, and therefore unlabeled, data. This helps the robot extend its knowledge of a particular object category by adding previously unseen aspects of that category to the trained model. If human feedback is available, a query is answered immediately; otherwise, pending queries are stored and later asked in order of their uncertainty, so that a query about the image with the highest uncertainty is given the highest priority. Queries are posed verbally, and the corresponding responses are processed with lexical tools to extract verbal features. Each instance is then represented by the extracted feature vector together with graphical measures. We present experimental results obtained by applying this algorithm to a set of natural scenes and object categories selected from the IAPR TC-12 database, with varied feature vectors for each image and its instances.
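The query-prioritization step described above can be sketched as follows. This is an illustrative assumption, not the thesis's exact formulation: the bag-level uncertainty measure, the class and function names, and the bag identifiers are all hypothetical. The sketch keeps a max-priority queue of pending bags (images with per-instance classifier probabilities) so that, when the human trainer becomes available, the most uncertain bag is queried first.

```python
import heapq


def bag_uncertainty(instance_probs):
    """Score a bag's uncertainty from its instance probabilities.

    Assumed measure (not the thesis's): a bag is most uncertain when
    even its most confidently classified instance sits near the
    decision boundary (probability close to 0.5). Returns a value in
    [0, 1], where 1 means maximally uncertain.
    """
    return 1.0 - max(abs(p - 0.5) * 2 for p in instance_probs)


class QueryQueue:
    """Pending queries ordered by decreasing bag uncertainty."""

    def __init__(self):
        self._heap = []     # max-heap emulated by negating uncertainty
        self._counter = 0   # tie-breaker: earlier arrivals asked first

    def add_bag(self, bag_id, instance_probs):
        """Store a newly arrived unlabeled bag as a pending query."""
        u = bag_uncertainty(instance_probs)
        heapq.heappush(self._heap, (-u, self._counter, bag_id))
        self._counter += 1

    def next_query(self):
        """Pop the most uncertain pending bag, or None if empty."""
        if not self._heap:
            return None
        neg_u, _, bag_id = heapq.heappop(self._heap)
        return bag_id, -neg_u


# Example: three bags arrive online; when the trainer is available,
# the robot asks about the most ambiguous bag ("scene_2") first.
q = QueryQueue()
q.add_bag("scene_1", [0.9, 0.8])   # confidently positive instances
q.add_bag("scene_2", [0.55, 0.4])  # instances near the boundary
q.add_bag("scene_3", [0.1])        # confidently negative instance
```

The tie-breaking counter preserves arrival order among equally uncertain bags, which matters in the online setting where stored queries may wait for human feedback.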