Browsing by Subject "Neural computers"
Now showing 1 - 6 of 6
Item
A speech processing application for the Huberman-Hogg neural network model (Texas Tech University, 1988-12) Taylor, Valerie J
A typical word recognition system requires that several major tasks be performed; the necessary components include (1) a preprocessor to extract the significant information from the speech time waveform, (2) a section that stores the training set of word models or templates and then compares an unknown input pattern with the training set, and (3) decision logic to determine the best-matching word. This thesis reports on experiments that explore isolated word recognition with an artificial neural network based on the Huberman-Hogg (H-H) model. The results presented in this manuscript were developed from computer simulations of the speech recognition system, but an electro-optical H-H system is also proposed and described. The principal goal of the experimental work is to test the suitability of the ambiguity function representation in preprocessing speech data. Employing the ambiguity function for the speech signal representation was expected to provide two advantages: first, the input patterns to the H-H network should become less sensitive to time shifts of the total speech waveform, perhaps even making time alignment of the words unnecessary; second, the ambiguity function of a signal can be obtained in real time with a coherent optical processor, as shown by Marks, Walkup, and Krile (1977), providing two-dimensional input to an electro-optical H-H network. Since studies indicate that the H-H neural network effectively processes a variety of input functions, this network was chosen as a classifier/recognizer for the ambiguity function patterns representing speech data. Ambiguity functions for isolated words are generated from digitized voice recordings and then submitted to the H-H network for training and recognition testing.
Which pattern of the training set best matches the unknown pattern is a decision that clearly depends on the distance metric employed, and these experiments explore the use of several similarity measures. Following an introductory discussion covering an overview of speech processing, the radar ambiguity function, and the Huberman-Hogg neural network model is a description of the experimental arrangement. Both the components of the system and the simulation software are treated. The next section gives the particulars of the various experimental conditions and results. It was found that the ambiguity function performed as desired, acting as a representation that makes the system less shift sensitive; however, the neural network processing, at least with the parameter set and decision logic employed, did not yield any increase in the recognition capabilities of the system. Several potential problem areas are identified and suggestions are made for future studies.

Item
Neural network application in image restoration (Texas Tech University, 1993-12) Wang, Bin
Not available

Item
Neural network structure modeling: an application to font recognition (Texas Tech University, 1988-12) Lee, Ming-chih Yeh
Two neural network models, Model H-H1 (Hogg and Huberman, 1984) and Model H-H2 (Hogg and Huberman, 1985), have been successfully applied to the font recognition problem and were used to recognize 26 English capital letters, each with six font representations. Recognition rate, memory space requirement, learning speed, and recognition speed were used to measure the models' performance. Model parameters such as memory array size, Smin_Smax, and Mmin_Mmax were varied to elucidate the models' behavior. As a result, both models achieved a 100% recognition rate when all six fonts were used as both the training set and the recognition set.
When three of the six fonts were used for training, Model H-H1 achieved a maximum recognition rate of 87.82% and Model H-H2 achieved a maximum recognition rate of 89.10%. This shows that basins of attractor states existed for the letters in most of the various font presentations. Model H-H2 significantly outperformed Model H-H1 in terms of recognition rate, use of memory space, and learning speed when all six fonts were used as the training set. This was supported by the results of a paired t-test.

Item
Noise effects and fault tolerance in Hopfield-type neural networks (Texas Tech University, 1990-05) Jong, Tai-Lang
Research interest in neural networks has grown rapidly in recent years. Studies covering many aspects of neural networks, from new models, simulations, and theoretical analyses to implementations and applications, have been reported. Little research, however, has been performed on noise effects and fault tolerance in neural networks. In this dissertation, the focus is on the investigation of Hopfield-type neural networks (HNNs) through both numerical simulations and theoretical analyses. Computer simulations and a linear combination concept are employed to study HNNs from a quantitative point of view. A statistical method and models are then proposed for various situations in analyzing different aspects of HNNs. Contributions of this dissertation include the following. First, a complementary Hopfield model (CHNN) is presented for improving the performance of the original Hopfield model. A generalized three-layered model capable of systematically describing HNNs and their extensions, e.g., higher-order, exponential-order, and winner-take-all nets, is then proposed. A rigorous analysis of HNNs, using a statistical technique, clearly displays their characteristics. Analyses and comparisons of first-order modifications, higher-order nets, and exponential-order nets are also performed.
The differences between even- and odd-order nets, as well as between auto- and hetero-associative memories, are pointed out and discussed. Various models for implementation error and/or noise sources, including detector/thresholding device noise, 2-D matrix mask noise, and gain variation noise, are then proposed, and an "excess" noise concept is developed, which leads to new results on the analysis of implementation noise effects in HNNs. This technique is then extended and successfully applied to the analysis of fault tolerance problems in HNNs, including synaptic interconnect faults and neuron stuck-at faults.

Item
Optical implementations of the alternating projection neural network (Texas Tech University, 1989-05) Smith, Alan Trowbridge
An analysis of the alternating projection neural network (APNN), along with results from one electronic and two optical implementations, is presented in this thesis. After an introduction, the APNN is explored both analytically and geometrically. Characteristics such as the speed of convergence are derived through matrix equations and linear algebra. When convergence of the APNN is visualized as a geometrical process based on projection, complementary principles are established, such as how the convergence rate changes as new data are stored in the network. Finally, experimental results from two optical architectures are discussed; both configurations use optical matrix-vector multipliers with feedback, but different spatial light modulators are used.

Item
Quantitative analyses of associative memories (Texas Tech University, 1992-05) Huang, Yo-ping
Neural networks have been studied for many years in the hope of simulating human-like activities such as recognizing a friend in a picture. Associative memories are systems that can recall stored data when presented with all or a portion of a probe that has been associated or paired with that data.
To date, most researchers have used the equal-probability neuron-state assumption to derive system performance; only a few have considered the non-equally distributed case. In this dissertation, we quantitatively analyze the characteristics of a variety of sparsely encoded associative memories. Based on each neuron operating close to its threshold, a dynamic thresholding scheme is proposed. With this dynamic approach, the storage capacity of a first-order sparsely encoded associative memory is shown to exceed that of an ordinary associative memory. The sensitivity of storage capacity with respect to variations in the threshold is calculated to observe the effect on capacity. Information capacity is also investigated in order to choose the optimum activity rate. Extensions are made to higher-order systems. Several properties, such as storage and information capacities, are explored to evaluate system performance. Other contributions include (1) consideration of the problem of retrieving stored patterns in a noisy environment, and (2) development of a fault-tolerance analysis of associative memories. Both neuron and connection fault models are analyzed in detail. Simulation results are shown to be consistent with theoretical work.
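Several of the abstracts above concern Hopfield-type associative memories, which store bipolar patterns with a Hebbian outer-product rule and recall them from noisy probes by thresholded updates. As general background only, not the specific models analyzed in these theses, a minimal sketch (the pattern sizes and update scheme are illustrative assumptions):

```python
import numpy as np

def train(patterns):
    """Hebbian outer-product storage; self-connections zeroed."""
    n = patterns.shape[1]
    W = patterns.T @ patterns / n
    np.fill_diagonal(W, 0.0)  # no neuron feeds itself
    return W

def recall(W, probe, max_steps=20):
    """Synchronous threshold updates until the state is stable."""
    s = probe.copy()
    for _ in range(max_steps):
        s_new = np.where(W @ s >= 0, 1, -1)
        if np.array_equal(s_new, s):
            break
        s = s_new
    return s

# Store two bipolar patterns, then recall from a one-bit-corrupted probe.
p1 = np.array([1, -1, 1, -1, 1, -1, 1, -1])
p2 = np.array([1, 1, 1, 1, -1, -1, -1, -1])
W = train(np.vstack([p1, p2]))

probe = p1.copy()
probe[0] = -probe[0]          # flip one bit
restored = recall(W, probe)   # recovers p1
```

Here the single flipped bit falls back into the stored pattern's basin of attraction; the theses above quantify how such recall degrades under implementation noise, sparse encoding, stuck-at neuron faults, and broken synaptic interconnects.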