Browsing by Subject "Learning models"
Now showing 1 - 2 of 2
Item
An analysis and theoretical development of mental learning curves (Texas Tech University, 2004-08) Jian, Jiun-Yin
This study investigated how human mental learning reaches an asymptotic state at an acceptable performance level. The mental learning model may encompass multiple factors not previously revealed, including threshold learning. Numerous studies have explored the physical and psychological aspects of human learning. Several components of learning have been identified, including time, accuracy, individual differences, and fatigue. Furthermore, previous findings have shown a relationship between time and errors, or time and quantity of production, as S-shaped curves. However, previous learning curve research concentrated more on skill learning or maze learning. Few studies evaluated detailed human mental learning or strategic information processing. In particular, these studies did not explicitly evaluate mental learning in terms of "threshold overcome," which possibly indicates an initial change in the rate of learning. The study was inductive research comprising one exploratory and two confirmatory experimental sessions (a monochrome puzzle and a typical-image puzzle) with repeated within-subject measures. Computer-based jigsaw puzzles were employed to explore constructs of mental learning, including the rate of learning, strategy utilization, and errors. Mathematical mental learning models and strategic information processing also were examined. A total of 125 participants were recruited; participants with color vision deficiency were dismissed. A learning phenomenon was ascertained across the three experiments, and completion times in puzzle-solving tasks were considerably reduced after four repetitions. Threshold learning was not observed in this study, while the learning models followed a power function. Use of puzzle-edge features and memorization were the two most frequently adopted strategies. No significant error reduction was found in the two confirmatory sessions.
Item
Continuous state Q-learning (Texas Tech University, 1999-05) Alcorn, Cristy Michele
Q-learning is a solution technique developed to solve classical Markov Decision Processes (MDPs). Markov Decision Processes are models for sequential decision-making problems and address many classical control problems. In Chapter I, this paper discusses the model and some standard solution techniques used in Markov Decision Processes, along with their limitations [6]. Q-learning was developed by Watkins to broaden the scope of problems that dynamic programming (MDP) techniques can solve. Classical Q-learning is a model-free solution technique and is therefore able to address a variety of poorly modeled decision problems that were unsolvable using standard MDP techniques. Watkins' development of Q-learning is based on Markov Decision Processes with discrete action and state spaces. The model and algorithm associated with classical Q-learning are described in Chapter II. To extend the set of problems that can be addressed using Q-learning, Chapter III presents solution techniques for poorly modeled problems with continuous state and/or action spaces. The model is slightly altered and the algorithm is adjusted to account for the continuous state and action spaces. Numerical examples show that continuous Q-learning does determine the optimal policy over time. Ongoing research is being carried out both to improve the current classical Q-learning method and to prove convergence in the continuous case.
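The first abstract reports that the learning models followed a power function, i.e., completion time declines as a power of the trial number. A minimal sketch of fitting such a curve: the data, parameter names, and values below are purely illustrative, not taken from the thesis, and the fit uses an ordinary log-log least-squares line rather than the study's actual method.

```python
import math

# Hypothetical completion times (seconds) over repeated puzzle trials;
# a power-law learning curve assumes T(n) = a * n**(-b).
trials = [1, 2, 3, 4, 5, 6]
times = [300.0, 210.0, 172.0, 150.0, 135.0, 124.0]

# Taking logs linearizes the model: log T = log a - b * log n,
# so an ordinary least-squares line fit recovers both parameters.
xs = [math.log(n) for n in trials]
ys = [math.log(t) for t in times]
k = len(xs)
mx = sum(xs) / k
my = sum(ys) / k
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
intercept = my - slope * mx

a = math.exp(intercept)   # initial completion time
b = -slope                # learning-rate exponent of the power curve
print(f"T(n) ~ {a:.1f} * n^(-{b:.2f})")
```

With the illustrative data above, the fitted exponent comes out near 0.5, the classic signature of a power-law speedup with practice.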
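The second abstract describes Watkins's classical Q-learning for discrete state and action spaces. A minimal sketch of the tabular algorithm on a toy chain MDP follows; the environment, parameter values, and episode count are illustrative assumptions, not content from the thesis.

```python
import random

# Toy deterministic chain MDP: states 0..4, state 4 is the goal/terminal.
N_STATES = 5
ACTIONS = [-1, +1]            # move left or move right
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1

def step(state, action):
    """Deterministic transition; reward 1 only on reaching the goal."""
    nxt = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward

random.seed(0)
Q = [[0.0, 0.0] for _ in range(N_STATES)]   # Q-table: Q[state][action]

for _ in range(500):                        # training episodes
    s = 0
    while s != N_STATES - 1:
        # Epsilon-greedy action selection.
        a = random.randrange(2) if random.random() < EPS else max((0, 1), key=lambda i: Q[s][i])
        s2, r = step(s, ACTIONS[a])
        # Classical Q-learning update: bootstrap on the best next action.
        Q[s][a] += ALPHA * (r + GAMMA * max(Q[s2]) - Q[s][a])
        s = s2

policy = ["left" if Q[s][0] > Q[s][1] else "right" for s in range(N_STATES - 1)]
print(policy)
```

Because the update bootstraps on max over next-state actions rather than on a transition model, the technique is model-free, which is the property the abstract highlights; handling continuous state spaces, as Chapter III of the thesis does, requires replacing this finite table with some function approximation.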