Browsing by Subject "neural networks"
Now showing 1 - 6 of 6
Item
A flexible control system for flexible manufacturing systems
(Texas A&M University, 2004-09-30) Scott, Wesley Dane
A flexible workcell controller has been developed using a three-level control hierarchy (workcell, workstation, equipment). The cell controller is automatically generated from a model input by the user. The model consists of three sets of graphs: one set describes the process plans of the parts produced by the manufacturing system, one set describes movements into, out of, and within workstations, and the third set describes movements of parts and transporters between workstations. The controller uses an event-driven Petri net to maintain state information and to communicate with lower-level controllers. The control logic is contained in an artificial neural network: the Petri net state information is used as the input to the neural net, and messages that are Petri net events are output from the neural net. A genetic algorithm was used to search over alternative operation choices to find a "good" solution. The system was fully implemented, and several test cases are described.

Item
Exploiting data parallelism in artificial neural networks with Haskell
(2009-08) Heartsfield, Gregory Lynn; Ghosh, Joydeep; Julien, Christine
Functional parallel programming techniques for feed-forward artificial neural networks trained using backpropagation learning are analyzed. In particular, the Data Parallel Haskell extension to the Glasgow Haskell Compiler is considered as a tool for achieving data parallelism. We find much potential and elegance in this method, and determine that a sufficiently large workload is critical in achieving real gains.
Several additional features are recommended to increase usability and improve results on small datasets.

Item
Fuzzy neural network pattern recognition algorithm for classification of the events in power system networks
(Texas A&M University, 2004-09-30) Vasilic, Slavko
This dissertation introduces an advanced artificial-intelligence-based algorithm for detecting and classifying faults on power system transmission lines. The proposed algorithm is aimed at substituting for classical relays, which are susceptible to performance deterioration under variable power system operating and fault conditions. The new concept relies on the principle of pattern recognition: it detects the existence of a fault, identifies the fault type, and estimates the faulted section of the transmission line. The approach utilizes a self-organized Adaptive Resonance Theory (ART) neural network combined with a fuzzy decision rule for interpreting the neural network outputs. The neural network learns the mapping between inputs and desired outputs by processing a set of example cases. Training is based on the combined use of unsupervised and supervised learning methods. During training, a set of input events is transformed into a set of prototypes of typical input events. During application, real events are classified by interpreting their match to the prototypes through the fuzzy decision rule. This study introduces several enhancements to the original version of the ART algorithm: suitable preprocessing of neural network inputs, an improved concept of supervised learning, fuzzification of neural network outputs, and utilization of on-line learning. A selected model of an actual power network is used to simulate extensive sets of scenarios covering a variety of power system operating conditions as well as fault and disturbance events.
Simulation results show improved recognition capabilities compared to a previous version of the ART neural network algorithm, a Multilayer Perceptron (MLP) neural network algorithm, and an impedance-based distance relay. The results also show exceptional robustness of the novel ART algorithm across all operating conditions and events studied, as well as superior classification capabilities compared to the other solutions. Consequently, it is demonstrated that the proposed ART solution may be used for accurate, high-speed distinction between faulted and unfaulted events, and for estimation of fault type and fault section.

Item
Multi-step-ahead prediction of MPEG-coded video source traffic using empirical modeling techniques
(Texas A&M University, 2006-04-12) Gupta, Deepanker
In the near future, multimedia will form the majority of Internet traffic, and the most popular standard used to transport and view video is MPEG. The MPEG media content is a time series of frame/VOP sizes. This time series is extremely noisy, and analysis shows that it has very long-range time dependency, making it even harder to predict than a typical time series. This work is an effort to develop multi-step-ahead predictors for the moving averages of frame/VOP sizes in MPEG-coded video streams. Both linear and nonlinear system identification tools are used to solve the prediction problem, and their performance is compared. Linear modeling is done using Auto-Regressive Exogenous (ARX) models; for nonlinear modeling, Artificial Neural Networks (ANN) are employed. The ANN architectures used in this work are the Feed-forward Multi-Layer Perceptron (FMLP) and the Recurrent Multi-Layer Perceptron (RMLP). Recent studies by Adas (October 1998), Yoo (March 2002), and Bhattacharya et al. (August 2003) have shown that multi-step-ahead prediction of individual frames is very inaccurate.
Therefore, this work predicts the moving average of the frame/VOP sizes instead of individual frame/VOP sizes. Several multi-step-ahead predictors are developed using the aforementioned linear and nonlinear tools for two-, four-, six-, and ten-step-ahead predictions of the moving average of the frame/VOP size time series of MPEG-coded video source traffic. The capability to predict future frame/VOP sizes, and hence bit rates, will enable more effective bandwidth allocation mechanisms, assisting in the development of the advanced source control schemes needed to manage multimedia traffic over wide area networks such as the Internet.

Item
Pseudokarst topography in a humid environment caused by contaminant-induced colloidal dispersion
(Texas A&M University, 2004-09-30) Sassen, Douglas Spencer
Over fifty small sinkholes (~1 meter in depth and width) were found in conjunction with structural damage to homes in an area south of Cleveland, TX. The local geology lacks the carbonate and evaporite deposits associated with normal sinkhole development through dissolution. The morphology and distribution of the sinkholes, and the geologic setting of the site, are consistent with piping erosion. However, the site lacks the significant hydraulic gradient or exit points for sediment associated with traditional piping erosion. In areas of sinkholes, geophysical measurements of apparent electrical conductivity delineated anomalously high conductivity levels that are interpreted as a brine release from a nearby oil-field waste injection well. The contaminated areas have sodium adsorption ratios (SAR) as high as 19, compared to background levels of 3. Sodium has been shown to cause dispersion of soil colloids, allowing sediment transport at very low velocities. Thus, subsurface erosion of dispersed sediment could be possible without significant hydraulic gradients. This hypothesis is supported by the observed depletion of colloidal particles within the E-horizon of the sinkholes.
However, there is little precedent for waste brines initiating colloid dispersion, and sodium dispersion is not thought to be an important process in piping erosion in humid settings such as this one. Therefore, laboratory experiments on samples from the site area, designed to simulate field conditions, were conducted to measure dispersion versus pH, SAR, and electrical conductivity (EC). Analysis of the experimental data with neural networks showed that an increase in SAR did increase dispersion. A dispersion prediction map, constructed with the trained neural network and calibrated geophysical data, showed correlation between sinkhole locations and increased predicted dispersion. This research indicates that a contaminant high in sodium content has caused colloidal dispersion, which may have allowed nontraditional subsurface erosion to occur in an area lacking a significant hydraulic gradient.

Item
Wavelets, Self-organizing Maps and Artificial Neural Nets for Predicting Energy Use and Estimating Uncertainties in Energy Savings in Commercial Buildings
(2010-01-14) Lei, Yafeng
This dissertation develops a "neighborhood"-based neural network model utilizing wavelet analysis and a Self-organizing Map (SOM) to predict building baseline energy use. Wavelet analysis was used for feature extraction from the daily weather profiles; the resulting few significant wavelet coefficients represent not only the average but also the variation of the weather components. A SOM is used for clustering and for projecting high-dimensional data, usually onto a one- or two-dimensional map, to reveal data structure that is not apparent from visual inspection. In this study, neighborhoods containing days with similar meteorological conditions are identified by a SOM using the significant wavelet coefficients, and a baseline model is then developed for each neighborhood. Within each neighborhood, modeling is more robust, without the compromises that occur in global predictor regression models.
This method was applied to the Energy Predictor Shootout II dataset and compared with the winning entries for hourly energy use prediction. A comparison between the "neighborhood"-based linear regression model and the change-point model for daily energy use prediction was also performed. The application of a non-parametric nearest-neighborhood-points approach to determining the uncertainty of energy use prediction was also studied. Uncertainty derived from "local" system behavior, rather than from global statistical indices such as root mean square error, is shown to be more realistic and credible than the statistical approaches currently used. In general, a baseline model developed from local system behavior is more reliable than a global baseline model. The "neighborhood"-based neural network model was found to predict building baseline energy use more accurately and to achieve more reliable estimation of energy savings, as well as of the associated uncertainties in savings from building retrofits.
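The "neighborhood" idea in the last item above can be illustrated with a minimal sketch: cluster days by weather features with a small self-organizing map, then fit a separate baseline model per map node. This is not the dissertation's implementation — the features here are raw temperature statistics rather than wavelet coefficients, the per-neighborhood model is a least-squares fit rather than a neural network, and all data, node counts, and learning rates are illustrative assumptions.

```python
# Hypothetical sketch of SOM-neighborhood baseline modeling; synthetic data.
import numpy as np

rng = np.random.default_rng(0)

def train_som(X, n_nodes=4, epochs=50, lr0=0.5):
    """1-D SOM: each node holds a prototype vector; the best-matching node
    and its map neighbors move toward each presented sample."""
    W = X[rng.choice(len(X), n_nodes, replace=False)].copy()
    for t in range(epochs):
        lr = lr0 * (1 - t / epochs)                  # decaying learning rate
        sigma = max(1.0 * (1 - t / epochs), 0.1)     # shrinking neighborhood
        for x in X[rng.permutation(len(X))]:
            bmu = np.argmin(np.linalg.norm(W - x, axis=1))
            for j in range(len(W)):
                h = np.exp(-((j - bmu) ** 2) / (2 * sigma ** 2))
                W[j] += lr * h * (x - W[j])
    return W

def assign(W, X):
    """Label each sample with its nearest SOM node."""
    return np.argmin(np.linalg.norm(X[:, None, :] - W[None, :, :], axis=2), axis=1)

# Synthetic "days": features = (mean temperature, temperature variation).
# Energy use follows different linear regimes for heating vs. cooling days.
temp = rng.uniform(0, 35, 400)
var = rng.uniform(1, 8, 400)
X = np.column_stack([temp, var])
energy = np.where(temp < 18, 50 - 1.5 * temp, 20 + 2.0 * temp) + rng.normal(0, 2, 400)

W = train_som(X)
labels = assign(W, X)

# One least-squares baseline model per neighborhood.
models = {}
for k in np.unique(labels):
    idx = labels == k
    A = np.column_stack([X[idx], np.ones(idx.sum())])
    models[k] = np.linalg.lstsq(A, energy[idx], rcond=None)[0]

def predict(x):
    """Route a new day to its neighborhood's local model."""
    k = int(np.argmin(np.linalg.norm(W - x, axis=1)))
    return np.append(x, 1.0) @ models[k]

cold_day = predict(np.array([5.0, 3.0]))   # falls in the heating regime
```

Because each model is fit only to days the SOM groups together, a cold day is predicted by a model trained on cold days, avoiding the compromise a single global regression would make across both regimes.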