Browsing by Subject "Learning theory"
Now showing 1 - 4 of 4
Item: Computational applications of invariance principles (2011-08)
Meka, Raghu Vardhan Reddy; Zuckerman, David I.; Dhillon, Inderjit S.; Gal, Anna; Gopalan, Parikshit; Klivans, Adam

This thesis focuses on applications of classical tools from probability theory and convex analysis, such as limit theorems, to problems in theoretical computer science, specifically pseudorandomness and learning theory. At first glance, limit theorems, pseudorandomness, and learning theory appear to be disparate subjects. However, as has now become apparent, there is a strong connection between these questions through a third, more abstract one: what do random objects look like? This connection is best illustrated by the study of the spectrum of Boolean functions, which has played a direct or indirect role in a plethora of results in complexity theory. The current thesis aims to take this program further by drawing on a variety of fundamental tools, both classical and new, from probability theory and analytic geometry. Our research contributions fall broadly into three categories.

Probability theory: The central limit theorem is one of the most important results in all of probability and a richly studied topic. Motivated by questions in pseudorandomness and learning theory, we obtain two new limit theorems, or invariance principles. The proofs of these new results in probability, of interest in their own right, have a computer science flavor and fall into the niche category of techniques from theoretical computer science with applications in pure mathematics.

Pseudorandomness: Derandomizing natural complexity classes is a fundamental problem in complexity theory, with several applications outside complexity theory. Our work addresses such derandomization questions for natural and basic geometric concept classes such as halfspaces, polynomial threshold functions (PTFs), and polytopes.
We develop a reasonably generic framework for obtaining pseudorandom generators (PRGs) from invariance principles, and apply it to old and new invariance principles to obtain the best known PRGs for these complexity classes.

Learning theory: Learning theory aims to understand which functions can be learned efficiently from examples. As developed in the seminal work of Linial, Mansour and Nisan (1994) and strengthened by several follow-up works, we now know of strong connections between the learnability of a class of functions and how sensitive the functions are to noise, as quantified by average sensitivity and noise sensitivity. Beyond their applications in learning, bounds on average and noise sensitivity have applications in hardness of approximation, voting theory, quantum computing, and more. Here we address the question of bounding the sensitivity of polynomial threshold functions and intersections of halfspaces, and obtain the best known results for these concept classes.

Item: Learning with positive and unlabeled examples (2015-12)
Natarajan, Nagarajan; Dhillon, Inderjit S.; Grauman, Kristen; Marcotte, Edward; Ravikumar, Pradeep; Tewari, Ambuj

Developing partially supervised models is becoming increasingly relevant in the context of modern machine learning applications, where supervision often comes at a cost. In particular, in several application domains the available training data consists only of positive and unlabeled examples (no negative examples). One motivating application in computational biology is predicting genes linked to human genetic disorders, where we do not have access to "negative" gene-disease associations but only a few positive associations. Existing methods for supervised learning (i.e., when the learner has access to both positive and negative examples) do not always work when the training data contains examples from only one class.
In this thesis, we study various machine learning problems with positive-unlabeled (PU) supervision and develop methods for the corresponding PU learning problems. We show that by reducing PU learning to learning with "one-sided label noise", one can obtain a family of methods applicable to diverse problems, including binary classification, multi-label learning, matrix completion, and multiple-instance learning. The benefits of such a reduction are twofold: (1) we can essentially reuse algorithms for supervised learning, with appropriate modifications to account for the partial supervision; (2) the resulting problem formulations are amenable to analysis, leading to strong theoretical guarantees on the performance of the proposed methods in PU learning tasks. Finally, we consider performance measures widely used in PU learning applications beyond traditional measures such as classification accuracy, and extend some of the guarantees to general performance measures.

Item: The meanings behind the screens: a qualitative study of the Screen It! program (2013-08)
Gleixner, Alison Marie; Bain, Christina

This case study examined the Screen It! program and how it benefited students. The study focused on students' perceptions; to gain a holistic understanding of the phenomenon, it was also important to understand the viewpoints of museum educators, teachers, and students. In these types of museum-school partnerships, students' voices are rarely heard or considered when curricula are created, so consideration of students' voices may help museum educators craft such partnership programs in the future. Three themes emerged, emphasizing the importance of expectations and program goals, curricular relevance to student life and community, and meaningful learning outcomes.
By utilizing relevant learning theories during classroom instruction and actively responding to students' voices and needs in these areas, museum educators can provide more meaningful learning experiences for students.

Item: Toward a new progressive theory of learning: a critical deconstruction and synthesis of three learning theories (2013-05)
Edghill, Elizabeth; Schallert, Diane L.

Understanding how students learn, that is, how they recognize, process, and internalize new information, is vital to any teacher's success. Although many theories exist in this field, I have selected three strong theories to initiate a discussion that I see as suggestive of a new, cohesive theory representing a synthesis of all three. For the purposes of this report, I have selected the theories of constructivism and social constructivism from Piaget and Vygotsky, Bronfenbrenner's ecological systems theory, and chaos theory as the basis for my proposed model. In the report, these three theories are deconstructed, and various components of each are then synthesized to suggest a comprehensive model. It is my intent that the proposed model be helpful to teachers in designing and tailoring instruction for their students. By understanding the relationships and inter-relationships of the child to the various systems that affect him or her, the teacher can better engage all students toward a successful outcome.