Browsing by Subject "Time-series analysis"
Now showing 1 - 14 of 14
Item
An investigation of the time series properties of sales and operating expenses of selected firms (Texas Tech University, 1978-08)
De Moville, Wiggins Beville
Not available

Item
Bispectral density computation and its application to time series analysis (Texas Tech University, 1995-08)
Dey, Aswini Kumar
Prediction and simulation are the main purposes of time series data analysis. To obtain good results, one needs to fit an appropriate model to the given time series data. Early time series model fitting was concerned with linear models, namely ARMA models. In many cases, however, a time series comes from a non-linear process, and linear models then fail to produce satisfactory results. In recent years, neural network approaches have been used successfully on almost all types of time series. The spectral density also plays an important role in time series analysis; when the data come from a linear and Gaussian process, it contains all the necessary and useful information about the series. To deal with non-linear and non-Gaussian processes, however, we need to consider higher-order spectra, the simplest of which is the bispectrum. In this thesis, we compute the bispectral density and use it to test the linearity and Gaussianity of ECG and WIND time series. Based on the outcome of this test, appropriate models are fitted for the data sets considered. The fitted model is then used for one-point prediction and simulation of the original series. It is found that multistep prediction collapses within a few steps; this is not the case with the neural network approach.

Item
ECG time series prediction with neural networks (Texas Tech University, 1995-08)
Christiansen, Brian Thomas
The comparison of three neural network methods for the prediction of a time series is studied.
The digitization of electrocardiograph recordings gathered from a group of patients by the Massachusetts Institute of Technology Division of Health Sciences and Technology serves as the basis for the time series to be predicted. The feed-forward back-propagation learning algorithm, radial basis functions with the orthogonal least squares learning algorithm, and recurrent networks with Pearlmutter's learning algorithm are the three neural network methods used for prediction. All three methods prove successful in single-point prediction and give fairly good results for as much as 5-point prediction, but beyond that the results are poor. The five points predicted represent less than one-quarter of a second of electrocardiograph recording time, showing that all three methods are unsuccessful as long-term predictors.

Item
Genetic algorithms with functional mutation and mating operators in time series data mining (Texas Tech University, 2004-08)
Huang, Jianyong
Recently, genetic algorithms (GAs) and artificial neural networks (ANNs) have been widely used in time series data mining (TSDM). Both GAs and ANNs are inspired by natural processes. A GA can be used to find optimized parameters for a given model, while an ANN can approximate unknown functions to any desired degree of accuracy without knowing the model. There are limitations to using GAs or ANNs individually in TSDM. For example, ANNs generally use backpropagation learning algorithms, which are based on steepest descent; the solution from an ANN is therefore usually only a local optimum. The purpose of this thesis is to develop algorithms that overcome the limitations of using GAs or ANNs alone in TSDM. The first part of this research designs a new genetic algorithm (called mGA), which can analyze not only polynomial but also non-polynomial time series. The mGA automatically searches for a polynomial function of minimal degree for a non-polynomial time series.
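As a loose illustration of this kind of search (not the mGA itself; all names and parameter choices below are ours), a minimal genetic algorithm can evolve the coefficients of a fixed-degree polynomial fitted to a sampled series:

```python
import random

def fitness(coeffs, series):
    # Mean squared error of the polynomial sum(c_k * t^k) against the series.
    err = 0.0
    for t, x in enumerate(series):
        pred = sum(c * t ** k for k, c in enumerate(coeffs))
        err += (pred - x) ** 2
    return err / len(series)

def evolve(series, degree=1, pop_size=60, generations=300, seed=0):
    # Truncation selection, blend crossover, and Gaussian mutation.
    rng = random.Random(seed)
    pop = [[rng.uniform(-2, 2) for _ in range(degree + 1)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda c: fitness(c, series))
        parents = pop[: pop_size // 2]            # keep the best half (elitism)
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            child = [(x + y) / 2 for x, y in zip(a, b)]   # blend crossover
            if rng.random() < 0.3:                        # Gaussian mutation
                i = rng.randrange(len(child))
                child[i] += rng.gauss(0.0, 0.1)
            children.append(child)
        pop = parents + children
    return min(pop, key=lambda c: fitness(c, series))

# Fit x(t) = 1 + 0.5 t, a degree-1 polynomial "time series".
series = [1 + 0.5 * t for t in range(20)]
best = evolve(series, degree=1)
```

The mGA additionally searches over the polynomial degree itself; here the degree is fixed in advance for brevity.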
The rest of this research focuses on developing a neural-network-based genetic algorithm (called nGANN). The nGANN represents a chromosome as a neural network and uses genetic operators to select a global solution for a time series. The nGANN introduces a new mating scheme (called NN_mate), which uses a backpropagation learning network to produce offspring; NN_mate can therefore mate two parents with different models. The solution found by the nGANN has two attractive features: a network with a small number of hidden neurons and a small mean squared error. From the solution network, it is possible to discover relationships among different variables. Three different types of time series data are used to evaluate the performance of the above algorithms. The two algorithms work well for one-variable polynomial and one-variable non-polynomial time series data; for two or more variables, they do not produce very good results. In the last part of this thesis, future work is discussed.

Item
Lagrangian multiplier in the Pearlmutter algorithm and dynamic neural networks (Texas Tech University, 1998-05)
Sun, Juan
Not available

Item
Mapping the team decision theory problem to Hopfield-like neural networks (Texas Tech University, 1993-12)
Rao, Giridhar
Team decision theory is a statistical discipline with applications in areas such as decentralized control and distributed computing. In the mid-to-late 1970s, this area was studied quite extensively; however, the inherent mathematical intractability of the problem placed several limitations on the scope of the study, with severe restrictions on the nature of the system inputs and their probability distribution functions. Unless the underlying probability density functions of the system parameters were Gaussian, it was not possible to derive analytical solutions to the problems.
In recent years, neural networks have become increasingly popular as a means to solve large optimization problems. The high interconnectivity and the nature of neuron layouts and interactions have led to success in mapping large optimization problems to neural networks; in particular, the Hopfield-Tank network and some derivations thereof have been successful for these problems. Neural networks are not sensitive to the underlying probability distributions of the systems they are trying to solve. With the advent of cheaper hardware and faster networks, distributed processing in a networked system has become increasingly popular. One of the key areas of study in distributed computing is load balancing: determining an optimal distribution of tasks or jobs among the various nodes in the system to maximize system performance and throughput. Several schemes have been studied with varying degrees of success.

Item
Models and algorithms for statistical timing and power analysis of digital integrated circuits (2007-05)
Wang, Wei-Shen, 1976-; Orshansky, Michael
The increased variability of process and environmental parameters is having a significant impact on the timing and power performance metrics of digital integrated circuits. Traditional deterministic timing and power analysis algorithms based on worst-case parameter values often lead to over-pessimistic predictions and may miss actual worst-case performance corners. As a result, there is an increasing need for statistical algorithms that can take into account the probabilistic nature of parameters. The practical application of statistical approaches, however, is restricted by the limited availability of parameter distributions and by the idealized modeling of parameters adopted in the statistical frameworks. In some cases, only partial probabilistic descriptions of parameters are available, such as the mean and variance.
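When only the mean and variance of a quantity are known, distribution-free tail bounds are one way to reason about it; the sketch below uses the one-sided Chebyshev (Cantelli) inequality as an illustrative example of working from a partial probabilistic description (the choice of bound and the numbers are ours, not the dissertation's):

```python
def cantelli_tail_bound(mean, var, threshold):
    # One-sided Chebyshev (Cantelli) bound: knowing only mean and variance,
    # P(X >= threshold) <= var / (var + (threshold - mean)^2) for threshold > mean.
    t = threshold - mean
    if t <= 0:
        return 1.0  # the bound is vacuous at or below the mean
    return var / (var + t * t)

# Example: a delay with mean 100 and variance 25 (sigma = 5), timing target 115.
bound = cantelli_tail_bound(100.0, 25.0, 115.0)  # 25 / (25 + 225) = 0.1
```

The bound holds for any distribution with the given mean and variance, which is exactly the situation where only partial descriptions of a parameter are available.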
Designers therefore urgently need statistical approaches that can handle partially specified uncertainty. The objective of this dissertation is to provide robust and accurate timing and power estimates for designers to assess the impact of variability on circuit performance. This dissertation proposes a set of statistical analysis algorithms to estimate circuit timing and leakage power dissipation based on robust probabilistic approaches and rigorous mathematical modeling of parameter uncertainty. Both full and partial probabilistic descriptions of parameters can be incorporated into the developed statistical frameworks. Specifically, the proposed approaches include: 1) a path-based statistical timing analysis algorithm handling path delay correlations; 2) a statistical timing analysis algorithm based on partial probabilistic descriptions of parameters; 3) analytical techniques for assessing the impact of threshold voltage variation on the leakage power of dual-threshold-voltage designs and for selecting optimal threshold voltages for leakage power reduction; and 4) a robust estimation algorithm for parametric yield and leakage dissipation based on realistic descriptions of parameter uncertainty. The developed algorithms, along with the new modeling strategy, effectively reduce the over-conservatism of corner-based deterministic algorithms and permit assessing the impact of variability on circuit performance in the early design phase, facilitating fast power and timing verification in the design process.
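The pessimism of corner-based analysis relative to a statistical treatment can be illustrated with a toy Monte Carlo sketch; the gate count and delay parameters below are invented for illustration and are not from the dissertation:

```python
import random
import statistics

def sample_path_delay(rng, n_gates=10, mean=100.0, sigma=5.0):
    # Each gate delay is an independent Gaussian; the path delay is their sum.
    return sum(rng.gauss(mean, sigma) for _ in range(n_gates))

rng = random.Random(1)
samples = [sample_path_delay(rng) for _ in range(20000)]

# Deterministic corner: every gate simultaneously at its 3-sigma worst case.
worst_case = 10 * (100.0 + 3 * 5.0)        # 1150

# Statistical view: independent variations partially cancel across gates.
stat_mean = statistics.mean(samples)        # close to 1000
stat_sigma = statistics.stdev(samples)      # close to 5 * sqrt(10), about 15.8
stat_3sigma = stat_mean + 3 * stat_sigma    # about 1047, well below the corner
```

Because the gate variations are independent, the statistical 3-sigma path delay sits far below the corner value; this gap is the over-conservatism that statistical timing analysis removes.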
As the magnitude of variability continues to increase, the developed statistical algorithms and modeling strategy will become increasingly important for future technology generations.

Item
Models and algorithms for statistical timing and power analysis of digital integrated circuits (2007)
Wang, Wei-Shen; Orshansky, Michael

Item
Persistence, sudden changes, and modeling volatility of financial time series (Texas Tech University, 2004-12)
Covarrubias, Guillermo
Not available

Item
System self-assessment of survival in time series modeling (Texas Tech University, 1998-05)
Lu, Huitian
The concept, theoretical argument, and practical implementation of system self-assessment of survival using time series modeling are defined, investigated, and developed. System self-assessment of survival predicts conditional reliability for a future period of time or usage, to support an operational mission in real time. As implemented, performance measures are monitored and modeled in physical terms, and associated models are then developed in probability/statistical terms. The key issues in system self-assessment of survival are physical performance measurement and the related modeling, forecasting, and survival estimation. The research develops theoretical connections between physical performance assessment and existing time series modeling, yielding a self-assessment-of-survival model based on the concept of performance reliability. Different methods, including autoregressive integrated moving average (ARIMA) models, exponential smoothing, and real-time recurrent neural networks, are assessed regarding their modeling and prediction capabilities in real time. In order to meet the real-time requirements of self-assessment of survival, model "self-generation" is emphasized in the context of on-line performance observation.
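One of the forecasting methods named above, exponential smoothing with a linear trend (Holt's method), can be sketched in a few lines; the smoothing constants here are illustrative, not the thesis's values:

```python
def holt_forecast(series, alpha=0.5, beta=0.3, horizon=1):
    # Holt's linear-trend exponential smoothing: maintain a smoothed level
    # and a smoothed trend, then extrapolate `horizon` steps ahead.
    level, trend = series[0], series[1] - series[0]
    for x in series[1:]:
        prev_level = level
        level = alpha * x + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
    return level + horizon * trend

# On a perfectly linear signal the method tracks the trend and the
# one-step forecast recovers the next value (up to rounding).
series = [2.0 * t for t in range(10)]
forecast = holt_forecast(series, horizon=1)  # 20.0
```

Because the level and trend are updated recursively from each new observation, the method suits the on-line, real-time setting the abstract describes.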
For demonstration and validation, the research develops the framework of a deliverable software package, Real-Time System Self-Assessment of Survival (RTSAS), which performs real-time data acquisition and survival self-assessment. The research describes methods useful for system self-assessment of survival based on physical system performance measures and time series modeling, for both a single failure mode and multiple independent failure modes. Results produced with linear trend exponential smoothing show promise for real-time field applications, provided that resolution of the physical signals can be obtained and the failure mode is properly defined in terms of physical performance.

Item
Theory and application of time-frequency analysis to transient phenomena in electric power and other physical systems (2004)
Shin, Yong June; Powers, Edward J.; Grady, W. M.

Item
Time-series analysis using orthogonal polynomials (Texas Tech University, 2003-05)
Vittal, Vinay Achalanand
Advances in the study of non-linear dynamics have encouraged the construction of models and simulators of non-linear time series. Researchers in both science and statistics have developed innovative methods for extracting information from systems that exhibit non-linear dynamics. A time series is a sequence x1, x2, x3, ..., xn observed in time. Time-series analysis depends on the fact that data points taken over time may have internal structure such as autocorrelation, trend, or seasonal variation; it is these properties that make model construction possible. As part of this research, the measure-based (MB) approach to reconstruction proposed by Giona [1] is investigated. This method is based on the Fourier expansion of the polynomial system orthonormal to the invariant measure. Programs based on the MB approach were written and tested on various one-dimensional time series such as the sine map, the tent map, and the logistic map.
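One-dimensional test series such as the tent and logistic maps mentioned above are simple to generate; a minimal sketch (initial conditions and lengths are illustrative):

```python
def logistic_map(x0=0.4, r=4.0, n=500):
    # Iterate x_{t+1} = r * x_t * (1 - x_t); r = 4 gives chaotic dynamics on [0, 1].
    xs = [x0]
    for _ in range(n - 1):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

def tent_map(x0=0.4, n=500):
    # Iterate x_{t+1} = 2*x_t if x_t < 0.5 else 2*(1 - x_t).
    xs = [x0]
    for _ in range(n - 1):
        x = xs[-1]
        xs.append(2 * x if x < 0.5 else 2 * (1 - x))
    return xs

series = logistic_map()
```

Note that iterating the tent map in floating point eventually collapses to zero because doubling shifts the binary representation; that numerical artifact is one reason such maps make interesting test cases for reconstruction methods.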
This approach to reconstruction furnishes good results when applied to chaotic one-dimensional time series.

Item
Two-dimensional linear discrete systems: a polynomial fractional approach (Texas Tech University, 1988-08)
Gapinski, Andrzej J
The purpose of this dissertation is two-fold. First, the class of two-dimensional linear time-invariant discrete systems is investigated and a unified approach is proposed for its representation. This approach, based on the two-dimensional polynomial fractional representation, is further extended to two-dimensional linear time-varying discrete systems. The algebraic framework is established using a division process in K[z1,z2], which is defined and investigated. A ring of generalized two-dimensional polynomials K{z1-,z2} with the division property is also defined. The main structure of the proposed realization and control theory is based on a module of signals over a two-dimensional polynomial ring and a skew polynomial ring. The 2-D Kalman input-output map is defined, and realization based on its factorization is considered. Various models for 2-D systems are also considered for the time-invariant case. Secondly, system-theoretic properties such as reachability and observability are explored, and the stability problem is considered. In the sequel, the polynomial equation Q X + R Y = ö is explored, and conditions are specified for the control of two-dimensional systems.

Item
Wavelet-based acoustic emission analysis of composite materials (Texas Tech University, 1996-08)
Qi, Gang
In this dissertation, a methodology for the time-frequency analysis of acoustic emission (AE) signals generated by the static loading of composite specimens is presented. The tool is based on a recently developed mathematical transform, the wavelet transform. Two aspects of AE-based nondestructive evaluation (NDE) are failure mode identification and residual strength prediction.
In this work, the wavelet-based AE method is applied to these two aspects of AE-based NDE. The published literature indicates that AE techniques are dominated by time-domain analysis methods, which have matured into tools that provide satisfactory results. Limited results are available that use frequency-domain techniques; however, there is valuable information in the frequency domain, so there is a need for an AE analysis technique that utilizes the time and frequency domains simultaneously. In this dissertation, such a hybrid technique is developed. For failure mode identification, the wavelet transform is applied to decompose the AE signals into different wavelet levels. A general trend is observed by investigating the energy-frequency distribution of the decomposed AE signals: the energy in the AE signals is essentially concentrated in three levels (seven, eight, and nine), representing frequency ranges of 50-150 kHz, 150-250 kHz, and 250-310 kHz. The energy percentages in levels seven, eight, and nine are determined to be 8%, 15%, and 75%, respectively. The analysis indicates that the three dominant wavelet levels may be related to different failure modes associated with the fracture of CFR composites. For the prediction of residual strength, the ability of the wavelet transform to enhance the signal-to-noise ratio is employed. The exponential constants used to determine the relationship between stress and stress intensity factor are compared for classical fracture mechanics and AE techniques. In the comparison study, the conventional and wavelet-based AE techniques are presented side by side to show the advantage of the wavelet-based methods; the results verify that the wavelet-based method improves on the results of classical fracture mechanics methods.
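The energy-per-level bookkeeping described above can be illustrated with a minimal Haar wavelet decomposition; the wavelet choice and the synthetic signal below are ours for illustration, not those used in the dissertation:

```python
import math

def haar_step(signal):
    # One Haar analysis step: scaled pairwise sums give the approximation,
    # scaled pairwise differences give the detail coefficients.
    half = len(signal) // 2
    approx = [(signal[2*i] + signal[2*i+1]) / math.sqrt(2) for i in range(half)]
    detail = [(signal[2*i] - signal[2*i+1]) / math.sqrt(2) for i in range(half)]
    return approx, detail

def wavelet_energy_fractions(signal, levels=4):
    # Decompose repeatedly and report the fraction of total signal energy
    # captured by the detail coefficients at each level.
    total = sum(x * x for x in signal)
    fractions = []
    approx = list(signal)
    for _ in range(levels):
        approx, detail = haar_step(approx)
        fractions.append(sum(d * d for d in detail) / total)
    return fractions

# A burst signal: a short high-frequency event riding on a slow wave.
signal = [math.sin(2 * math.pi * t / 64)
          + (0.8 * math.sin(2 * math.pi * t / 4) if 100 <= t < 120 else 0.0)
          for t in range(256)]
fracs = wavelet_energy_fractions(signal)
```

Because the Haar transform is orthonormal, the detail energies plus the remaining approximation energy sum to the total, which is what makes "percentage of energy per level" a meaningful diagnostic.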