Neural network structure modeling: an application to font recognition

dc.creator Lee, Ming-chih Yeh
dc.date.accessioned 2016-11-14T23:18:28Z
dc.date.available 2011-02-18T18:54:24Z
dc.date.available 2016-11-14T23:18:28Z
dc.date.issued 1988-12
dc.degree.department Computer Science
dc.description.abstract Two neural network models, Model H-H1 (Hogg and Huberman, 1984) and Model H-H2 (Hogg and Huberman, 1985), were successfully applied to the font recognition problem and used to recognize the 26 English capital letters, each in six font representations. Recognition rate, memory space requirement, learning speed, and recognition speed were used to measure the models' performance. Model parameters such as memory array size, Smin-Smax, and Mmin-Mmax were varied to elucidate the models' behavior. Both models achieved a 100% recognition rate when all six fonts were used as both the training and the recognition set. When three of the six fonts were used for training, Model H-H1 achieved a maximum recognition rate of 87.82% and Model H-H2 a maximum recognition rate of 89.10%, showing that basins of attractor states existed for the letters in most of the font presentations. Model H-H2 significantly outperformed Model H-H1 in recognition rate, memory usage, and learning speed when all six fonts were used as the training set; this was supported by the results of a paired t-test.
dc.format.mimetype application/pdf
dc.identifier.uri http://hdl.handle.net/2346/8537
dc.language.iso eng
dc.publisher Texas Tech University
dc.rights.availability Unrestricted.
dc.subject Pattern recognition systems
dc.subject Neural computers
dc.subject Computer vision
dc.subject Artificial intelligence -- Computer programs
dc.title Neural network structure modeling: an application to font recognition
dc.type Thesis