Browsing by Subject "Parameter estimation"
Now showing 1 - 18 of 18
Item: Adaptive measure-theoretic parameter estimation for coastal ocean modeling (2015-08). Graham, Lindley Christin; Dawson, Clinton N.; Butler, Troy; Gamba, Irene; Ghattas, Omar; Moser, Robert.

Since Hurricane Katrina (2005), there has been a marked increase in the quantity of field observations gathered during and after hurricanes, along with an increased effort to improve our ability to model hurricanes and other coastal ocean phenomena. The majority of the death and destruction caused by a hurricane is due to storm surge. The primary controlling factor in storm surge is the balance between the surface stress due to the wind and the bottom stress. Manning's formula can be used to model the bottom stress; the formula includes the Manning's n coefficient, which accounts for momentum loss due to bottom roughness and is a spatially dependent field. It is impractical to measure Manning's n over large physical domains. Instead, given a computational storm surge model and a set of model observations, one may formulate and solve an inverse problem to determine probable Manning's n fields from observational data, which in turn can be used for predictive simulations. On land, Manning's n may be inferred from land cover classification maps. We leverage existing land cover classification data to determine the spatial distribution of land cover classifications, which we consider certain. These classifications can be used to obtain a parameterized mesoscale representation of the Manning's n field. We seek to estimate the Manning's n coefficients for this parameterized field. The inverse problem is formulated using a measure-theoretic approach, with the ADvanced CIRCulation (ADCIRC) model for coastal and estuarine waters as the forward model of storm surge. The uncertainty in observational data is described as a probability measure on the data space. The solution to the inverse problem is a non-parametric probability measure on the parameter space.
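The set-valued, measure-theoretic inversion described in this abstract can be illustrated on a toy problem. This is a minimal sketch under simplifying assumptions (a two-parameter model with a scalar quantity of interest, uniform initial sampling of the parameter space, and a Gaussian probability measure on the data space); the `qoi` map and all numbers are hypothetical, not taken from the thesis.

```python
import numpy as np

# Hypothetical scalar QoI map: two parameters -> one observation, so the
# inverse of a fixed datum is a set (a contour) in parameter space.
def qoi(lam):
    return lam[:, 0] + lam[:, 1] ** 2

rng = np.random.default_rng(0)
samples = rng.uniform(0.0, 1.0, size=(100_000, 2))   # uniform sampling of the parameter space

q = qoi(samples)

# Probability measure on the data space (observational uncertainty),
# here an unnormalized Gaussian density centered at an assumed datum 0.8.
obs_density = lambda d: np.exp(-0.5 * ((d - 0.8) / 0.1) ** 2)

# Push-forward density of the QoI under the initial samples (histogram estimate).
hist, edges = np.histogram(q, bins=50, density=True)
push = hist[np.clip(np.digitize(q, edges) - 1, 0, 49)]

# Ratio weights: probabilities of parameter events are then estimated as
# weighted sample fractions, preserving the data/parameter-space geometry.
w = np.where(push > 0, obs_density(q) / np.maximum(push, 1e-12), 0.0)
event = samples[:, 0] > 0.5            # an arbitrary event in parameter space
p_event = np.sum(w * event) / np.sum(w)
```

The weighted samples approximate the non-parametric probability measure on the parameter space, and `p_event` is the estimated probability of one implicitly defined parameter event.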
The goal is to use this solution to compute the probability of arbitrary events in the parameter space. In the cases studied here, the dimension of the data space is smaller than the dimension of the parameter space; thus, the inverse of a fixed datum is generally a set of values in parameter space. The advantage of the measure-theoretic approach is that it preserves the geometric relation between the data space and the parameter space within the probability measure. Solving an inverse problem often involves the exploration of a high-dimensional parameter space, requiring numerous expensive forward model solves. We use adaptive algorithms for solving the stochastic inverse problem to reduce error in the estimated probability of implicitly defined parameter events while minimizing the number of forward model solves.

Item: Coordinated measurements and estimation using quantized particle swarm optimization (2012-05). Deshpande, Sagar; Hui, Qing; Smith, Philip; Berg, Jordan M.

The field of optimization is vast and growing: every day, optimization algorithms are applied across the sciences to problems with widely varying complexity and time requirements, and there is a constant need for better and faster algorithms. This thesis presents one such algorithm, the Quantized Particle Swarm Optimization (QPSO) algorithm, which improves on the standard Particle Swarm Optimization (PSO) algorithm while retaining most of its properties, as PSO has been empirically shown to perform very well. In the new algorithm, the particles additionally communicate with each other via a new variable called the "Quantizer," which exploits the group effect to improve algorithm performance.
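As a rough illustration of the idea, here is a minimal PSO sketch in which the shared global-best position is quantized before being broadcast to the swarm. The quantization rule (rounding to a fixed grid) is a hypothetical stand-in for the thesis's Quantizer variable, and the coefficients are generic PSO defaults, not the thesis's settings.

```python
import numpy as np

def sphere(x):
    # Classic benchmark: minimum 0 at the origin.
    return np.sum(x ** 2, axis=-1)

def qpso(f, dim=2, n=30, iters=200, step=0.05, seed=0):
    """Standard PSO, except the globally shared best position is quantized
    to a grid of width `step` before particles see it."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n, dim))
    v = np.zeros_like(x)
    pbest, pval = x.copy(), f(x)
    g = pbest[np.argmin(pval)]
    for _ in range(iters):
        r1, r2 = rng.random((2, n, dim))
        g_q = np.round(g / step) * step          # quantized communication
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g_q - x)
        x = x + v
        fx = f(x)
        better = fx < pval
        pbest[better], pval[better] = x[better], fx[better]
        g = pbest[np.argmin(pval)]
    return g, pval.min()

best, val = qpso(sphere)
```

On the sphere benchmark the quantized swarm still converges, because each particle's personal best remains exact; only the shared attractor is coarsened.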
Simulation results suggesting the advantages of QPSO over PSO are presented for eight benchmark test functions. A convergence analysis for the deterministic version of QPSO is provided, and an application of the algorithm to a threat-surface parameter estimation problem is also presented. Initial work on a heuristic approach to localizing and estimating the parameters of unknown threat sources through coordinated measurements taken by kinematically constrained sensing robots is described. This approach is tested in simulation for the one-source and two-source cases, and the results are presented. The goal was to recover the true parameters of the threat sources as efficiently and quickly as possible.

Item: Data assimilation for parameter estimation in coastal ocean hydrodynamics modeling (2013-12). Mayo, Talea Lashea; Dawson, Clinton N.

Coastal ocean models are used for a vast array of applications, including modeling tidal and coastal flows, waves, and extreme events such as tsunamis and hurricane storm surges. Tidal and coastal flows are the primary application of this work, as they play a critical role in many practical research areas, such as contaminant transport, navigation through intracoastal waterways, development of coastal structures (e.g., bridges, docks, and breakwaters), commercial fishing, planning and execution of military operations in marine environments, and recreational aquatic activities. Coastal ocean models are used to determine tidal amplitudes, time intervals between low and high tide, and the extent of the ebb and flow of tidal waters, often at specific locations of interest. However, modeling tidal flows can be quite complex, as factors such as the configuration of the coastline, water depth, ocean floor topography, and hydrographic and meteorological impacts can have significant effects and must all be considered.
Water levels and currents in the coastal ocean can be modeled by solving the shallow water equations. The shallow water equations contain many parameters, and the accurate estimation of both tides and storm surge depends on the accuracy of their specification. Of particular importance are the parameters used to define the bottom stress in the domain of interest [50]. These parameters are often heterogeneous across the seabed of the domain. Their values cannot be measured directly, and relevant data can be expensive and difficult to obtain. The parameter values must often be inferred, and the estimates are often inaccurate or contain a high degree of uncertainty [28]. In addition, as is the case with many numerical models, coastal ocean models have various other sources of uncertainty, including the approximate physics, numerical discretization, and uncertain boundary and initial conditions. Quantifying and reducing these uncertainties is critical to providing more reliable and robust storm surge predictions. It is also important to reduce the resulting error in the forecast of the model state as much as possible. The accuracy of coastal ocean models can be improved using data assimilation methods. In general, statistical data assimilation methods are used to estimate the state of a model given both the original model output and observed data. A major advantage of statistical data assimilation methods is that they can often be implemented non-intrusively, making them relatively straightforward to apply. They also provide estimates of the uncertainty in the predicted model state. Unfortunately, with the exception of the estimation of initial conditions, they do not contribute to the information contained in the model: the model error that results from uncertain parameters is reduced, but information about the parameters themselves remains unknown. Thus, the other commonly used approach to reducing model error is parameter estimation.
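One standard way to combine the two approaches is to augment the model state with the unknown parameters and let a Kalman-type filter estimate both jointly. The sketch below does this for a hypothetical scalar model x_{k+1} = a·x_k with an extended Kalman filter on synthetic data; the model, noise levels, and filter settings are illustrative assumptions, not the thesis's square root Kalman filter or its ADCIRC setup.

```python
import numpy as np

# Estimate a constant drag-like parameter a in x_{k+1} = a*x_k + w_k by
# augmenting the state z = [x, a] and filtering synthetic observations of x.
rng = np.random.default_rng(1)
a_true = 0.9
xs = [1.0]
for _ in range(100):
    xs.append(a_true * xs[-1] + 0.01 * rng.standard_normal())
obs = np.array(xs[1:]) + 0.05 * rng.standard_normal(100)

z = np.array([1.0, 0.5])          # initial guesses for x and a
P = np.diag([1.0, 1.0])           # initial covariance
Q = np.diag([1e-4, 1e-6])         # small process noise keeps a adjustable
R = 0.05 ** 2                     # observation noise variance
H = np.array([[1.0, 0.0]])        # we observe x only

for y in obs:
    F = np.array([[z[1], z[0]],   # Jacobian of the augmented model
                  [0.0, 1.0]])
    z = np.array([z[1] * z[0], z[1]])      # predict: x <- a*x, a <- a
    P = F @ P @ F.T + Q
    S = H @ P @ H.T + R                    # innovation variance
    K = (P @ H.T) / S                      # Kalman gain
    z = z + K[:, 0] * (y - z[0])           # update state and parameter
    P = (np.eye(2) - K @ H) @ P
```

After assimilating the observations, `z[1]` is the filter's estimate of the parameter and `P[1, 1]` its remaining variance.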
Historically, model parameters such as the bottom stress terms have been estimated using variational methods. Variational methods formulate a cost functional that penalizes the difference between the modeled and observed state, and then minimize this functional over the unknown parameters. Though variational methods are an effective approach to solving inverse problems, they can be computationally intensive and difficult to code as they generally require the development of an adjoint model. They also are not formulated to estimate parameters in real time, e.g. as a hurricane approaches landfall. The goal of this research is to estimate parameters defining the bottom stress terms using statistical data assimilation methods. In this work, we use a novel approach to estimate the bottom stress terms in the shallow water equations, which we solve numerically using the Advanced Circulation (ADCIRC) model. In this model, a modified form of the 2-D shallow water equations is discretized in space by a continuous Galerkin finite element method, and in time by finite differencing. We use the Manning’s n formulation to represent the bottom stress terms in the model, and estimate various fields of Manning’s n coefficients by assimilating synthetic water elevation data using a square root Kalman filter. We estimate three types of fields defined on both an idealized inlet and a more realistic spatial domain. For the first field, a Manning’s n coefficient is given a constant value over the entire domain. For the second, we let the Manning’s n coefficient take two distinct values, letting one define the bottom stress in the deeper water of the domain and the other define the bottom stress in the shallower region. And finally, because bottom stress terms are generally spatially varying parameters, we consider the third field as a realization of a stochastic process. 
We represent a realization of the process using a Karhunen-Loève expansion, and then seek to estimate the coefficients of the expansion. We perform several observing system simulation experiments, and find that we are able to accurately estimate the bottom stress terms in most of our test cases. Additionally, we are able to improve forecasts of the model state in every instance. The results of this study show that statistical data assimilation is a promising approach to parameter estimation.

Item: Fault detection and model-based diagnostics in nonlinear dynamic systems (2010-12). Nakhaeinejad, Mohsen; Bryant, Michael D.; Driga, Mircea D.; Fahrenthold, Eric P.; Fernandez, Benito; Longoria, Raul G.

Modeling, fault assessment, and diagnostics of rolling element bearings and induction motors were studied. A dynamic model of rolling element bearings with faults was developed using vector bond graphs. The model incorporates gyroscopic and centrifugal effects, contact deflections and forces, contact slip and separations, and localized faults. Dents and pits on the inner race, outer race, and balls were modeled through surface profile changes. Experiments with healthy and faulty bearings validated the model. Bearing load zones under various radial loads and clearances were simulated. The model was used to study the dynamics of faulty bearings, including the effects of the type, size, and shape of faults on the vibration response and on the dynamics of contacts in the presence of localized faults. A signal processing algorithm, called the feature plot, based on variable window averaging and time feature extraction, was proposed for diagnostics of rolling element bearings. In experiments, faults such as dents, pits, and rough surfaces on the inner race, balls, and outer race were detected and isolated using the feature plot technique. Time features such as shape factor, skewness, kurtosis, peak value, crest factor, impulse factor, and mean absolute deviation were used in feature plots.
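The time-domain features listed above are simple to compute from a raw vibration record. Below is a sketch using common textbook definitions (the thesis's exact definitions and windowing may differ); the synthetic "faulty" signal just adds periodic impulses to noise, to show why kurtosis and crest factor respond to localized faults.

```python
import numpy as np

def time_features(x):
    """Common time-domain features used in bearing diagnostics."""
    x = np.asarray(x, dtype=float)
    rms = np.sqrt(np.mean(x ** 2))
    mean_abs = np.mean(np.abs(x))
    peak = np.max(np.abs(x))
    mu, sigma = x.mean(), x.std()
    return {
        "rms": rms,
        "peak": peak,
        "shape_factor": rms / mean_abs,
        "crest_factor": peak / rms,
        "impulse_factor": peak / mean_abs,
        "skewness": np.mean(((x - mu) / sigma) ** 3),
        "kurtosis": np.mean(((x - mu) / sigma) ** 4),
        "mean_abs_deviation": np.mean(np.abs(x - mu)),
    }

# A localized fault strikes once per revolution: periodic impulses on top of
# broadband noise raise kurtosis and crest factor well above the healthy case.
rng = np.random.default_rng(0)
healthy = rng.standard_normal(10_000)
faulty = healthy.copy()
faulty[::500] += 8.0
```

For Gaussian noise the kurtosis is near 3; the impulsive "faulty" record scores markedly higher, which is the basis for impulse-sensitive features in fault isolation.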
The performance of feature plots in bearing fault detection when only a finite number of samples is available was also demonstrated. The results suggest that the feature plot technique can detect and isolate localized faults and rough surface defects in rolling element bearings, and the proposed diagnostic algorithm has potential for other applications, such as gearboxes. A model-based diagnostic framework consisting of modeling, nonlinear observability analysis, and parameter tuning was developed for three-phase induction motors. A bond graph model was developed and verified with experiments. Nonlinear observability analysis based on Lie derivatives identified the most observable configuration of sensors and parameters. A continuous-discrete Extended Kalman Filter (EKF) technique was used for parameter tuning to detect stator and rotor faults, bearing friction, and mechanical loads from current and speed signals. A dynamic process noise technique based on the validation index was implemented for the EKF. A complex-step Jacobian technique improved the computational performance of the EKF and the observability analysis. The results suggest that motor faults, bearing rotational friction, and the mechanical load of induction motors can be detected using model-based diagnostics, as long as the configuration of sensors and parameters is observable.

Item: Feasibility of isotropic inversion in orthorhombic media : the Barrett unconventional model (2016-05). Yanke, Andrew James; Spikes, Kyle; Sen, Mrinal K.; Fomel, Sergey B.

Geophysicists often treat shale reservoirs as having higher symmetries (e.g., transversely isotropic (TI) or isotropic) than reality demonstrates. Routine application of TI (or even isotropic) algorithms to orthorhombic media neglects the associated errors, because we never know the true model in practice. This thesis evaluates the viability of isotropic post-stack and pre-stack seismic inversion in orthorhombic media using the SEAM Barrett Unconventional Model, the most realistic depositional model to date.
The Barrett Model contains buried topography, simulated stratigraphy, and designated reservoir zones with orthorhombic anisotropy. I inverted the Barrett data volume for isotropic elastic property cubes, which I compared to the model volume in each symmetry plane of an orthorhombic medium. If the stacked seismic data contained only the near offsets, post-stack inversion resolved acoustic impedances that closely matched the true model both within and outside the reservoir zones at all well locations. Anisotropy most affected the far offsets, so muting them predictably enhanced the post-stack inversion. I retained all offsets for pre-stack inversion, but applied a parabolic Radon filter (rather than nonhyperbolic moveout analysis) to eliminate nonhyperbolic behavior at far offsets. The pre-stack impedance attributes adequately described the vertical heterogeneity of the true model at a cross-validation well, but the inverted values relied increasingly on the initial model with depth. The inverted density estimates exhibited notable oscillations relative to the initial model, particularly where steep contrasts in elastic properties occurred. Mismatch of the inverted elastic properties at the well locations can be attributed to noise, thin-layering effects, band limitation, steep contrasts in elastic properties, AVO behavior stacked into the data, an inaccurate starting model, and the effects of anisotropy. The most significant sources of error include small-scale reflectivity and comprehensive filtering of nonhyperbolic phenomena. Away from the well locations, the isotropic inversion gave no visual indication of reservoir geobodies, but it sufficiently described the elastic property variations near reservoir mid-sections. Moreover, I showed that the inverted elastic properties differ from their orthorhombic models by no more than 35%. The greatest misfits occurred near reservoir contacts and geobody locations.
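For reference, the basic relation behind post-stack acoustic-impedance inversion is the normal-incidence reflectivity recursion, sketched below on a noise-free synthetic log. Real inversions, including those in this thesis, work with band-limited, noisy data and constrain the result with a starting model; the impedance values here are arbitrary.

```python
import numpy as np

def reflectivity(z):
    # Normal-incidence reflection coefficients from an impedance log.
    return (z[1:] - z[:-1]) / (z[1:] + z[:-1])

def invert(r, z0):
    # Recursive (trace-integration) inversion: Z_{i+1} = Z_i * (1+r_i)/(1-r_i),
    # given the impedance z0 at the top of the log.
    z = [z0]
    for ri in r:
        z.append(z[-1] * (1 + ri) / (1 - ri))
    return np.array(z)

z_true = np.array([2.0e6, 2.2e6, 2.1e6, 3.0e6, 2.8e6])  # synthetic impedance log
r = reflectivity(z_true)
z_rec = invert(r, z_true[0])
```

With exact, full-band reflectivity the recursion reproduces the log; band limitation and noise are what force the model-based formulations used in practice.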
The computed impedance models in each symmetry plane show distinctive differences, but isotropic inversion dismisses these variations entirely. I conclude that isotropic inversion should not be a surrogate for orthorhombic methods in data preconditioning and quantitative reservoir characterization.

Item: Heterogeneous Reservoir Characterization Utilizing Efficient Geology Preserving Reservoir Parameterization through Higher Order Singular Value Decomposition (HOSVD) (2015-01-21). Afra, Sardar.

Petroleum reservoir parameter inference is a challenging problem for many reservoir simulation workflows, especially for real reservoirs with a high degree of complexity, nonlinearity, and dimensionality. The process of estimating a large number of unknowns in an inverse problem leads to a very costly computational effort. Moreover, it is very important to perform geologically consistent reservoir parameter adjustments as data are assimilated in the history matching process, i.e., the process of adjusting the parameters of the reservoir system so that the output of the reservoir model matches previous reservoir production data. It is therefore of great interest to approximate reservoir petrophysical properties, such as permeability and porosity, while reparameterizing these parameters through reduced-order models. As we will show, petroleum reservoir models are in general complex, nonlinear, and large-scale, i.e., they have a large number of states and unknown parameters. Thus, a practical approach that reduces the number of reservoir parameters, so that the reservoir model can be reconstructed with a lower dimensionality, is of high interest. Furthermore, de-correlating system parameters while keeping the geological description intact is paramount to controlling the ill-posedness of the system in all history matching and reservoir characterization problems.
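A minimal version of the HOSVD itself can be written with mode unfoldings and ordinary SVDs. The sketch below compresses a toy three-way array standing in for an ensemble of 2-D property fields; the construction and the ranks are hypothetical, and the exact reconstruction only holds because the toy tensor has low multilinear rank.

```python
import numpy as np

def unfold(T, mode):
    # Matricize the tensor along the given mode.
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_mult(T, M, mode):
    # Multiply tensor T by matrix M along the given mode.
    return np.moveaxis(np.tensordot(M, np.moveaxis(T, mode, 0), axes=1), 0, mode)

def hosvd(T, ranks):
    """Truncated higher-order SVD: factor matrices from SVDs of the mode
    unfoldings, core tensor by projecting T onto them."""
    U = [np.linalg.svd(unfold(T, m), full_matrices=False)[0][:, :r]
         for m, r in enumerate(ranks)]
    core = T
    for m, Um in enumerate(U):
        core = mode_mult(core, Um.T, m)
    return core, U

def reconstruct(core, U):
    T = core
    for m, Um in enumerate(U):
        T = mode_mult(T, Um, m)
    return T

# Toy stand-in for an ensemble of 8 permeability fields on a 20x20 grid,
# built to have multilinear rank (3, 3, 3).
rng = np.random.default_rng(0)
A, B, C = rng.random((8, 3)), rng.random((20, 3)), rng.random((20, 3))
T = np.einsum('ir,jr,kr->ijk', A, B, C)

core, U = hosvd(T, (3, 3, 3))
T_hat = reconstruct(core, U)   # exact here; lossy for full-rank data
```

The small core plus factor matrices is the reduced parameterization: history matching can then adjust core coefficients instead of every grid cell, which is how a low-dimensional, structure-preserving search space arises.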
In the first part of the present work, we introduce the advantages of a novel parameterization method based on higher order singular value decomposition (HOSVD). We show that HOSVD outperforms classical parameterization techniques with respect to computational and implementation cost. It also provides more reliable and accurate predictions in the petroleum reservoir history matching problem, owing to its ability to preserve geological features of reservoir parameters such as permeability. The power of HOSVD is investigated through several synthetic and real petroleum reservoir benchmarks, and all results are compared to those of the classic SVD. In addition to the parameterization problem, we also address the ability of HOSVD to reproduce accurate production data compared to that of the original reservoir system. To generate the results of the present work, we employ the commercial reservoir simulator ECLIPSE. In the second part of the work, we address inverse modeling, i.e., the reservoir history matching problem. We employ the ensemble Kalman filter (EnKF), an ensemble-based characterization approach, to solve the inverse problem, and integrate our new parameterization technique into the EnKF algorithm to study the suitability of HOSVD-based parameterization for reducing the dimensionality of the parameter space and for estimating geologically consistent permeability distributions. The results illustrate the characteristics of the proposed parameterization method through several numerical examples, including synthetic and real reservoir benchmarks. Moreover, the advantages of HOSVD are discussed by comparing its performance to the classic SVD (PCA) parameterization approach.

Item: Item and person parameter estimation using hierarchical generalized linear models and polytomous item response theory models (2003-05). Williams, Natasha Jayne; Koch, William R.; Beretvas, Susan Natasha.

Item: Kinetics of Anionic Surfactant Anoxic Degradation (2010-07-14). Camacho, Julianna G.

The biodegradation kinetics of Geropon TC-42™ by an acclimated culture was investigated in anoxic batch reactors to determine biokinetic coefficients for use in two biofilm mathematical models. Geropon TC-42™ is the surfactant commonly used in space habitation. The two biofilm models differ in that one assumes a constant biofilm density while the other allows biofilm density changes based on space occupancy theory. Extant kinetic analysis of a mixed microbial culture using Geropon TC-42™ as the sole carbon source was used to determine the cell yield, specific growth rate, and half-saturation constant for S0/X0 ratios of 4, 12.5, and 34.5. To estimate the cell yield, linear regression analysis was performed on data obtained from three sets of simultaneous batch experiments at the three S0/X0 ratios. The regressions showed non-zero intercepts, suggesting that cell multiplication is not possible at low substrate concentrations. Non-linear least-squares analysis of the integrated equation was used to estimate the specific growth rate and the half-saturation constant. The dependence of the net specific growth rate on substrate concentration indicates a self-inhibitory effect of Geropon TC-42™. The flow rate and the ratio of the concentrations of surfactant to nitrate were the factors that most affected the simulations. Higher flow rates resulted in a shorter hydraulic retention time, shorter startup periods, and a faster approach to a steady-state biofilm. At steady state, higher flow resulted in lower surfactant removal.
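The nonlinear least-squares step can be illustrated with the Monod rate form μ(S) = μ_max·S/(K_s + S). The sketch below fits synthetic rate data directly (the thesis fits the integrated substrate-depletion equation instead); the true values, noise level, and initial guesses are arbitrary assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def monod(S, mu_max, Ks):
    # Monod kinetics: specific growth rate as a function of substrate S.
    return mu_max * S / (Ks + S)

# Synthetic rate data with assumed true values mu_max = 0.35 1/h, Ks = 20 mg/L.
rng = np.random.default_rng(0)
S = np.linspace(1, 200, 25)
mu_obs = monod(S, 0.35, 20.0) + 0.005 * rng.standard_normal(S.size)

# Nonlinear least squares recovers both parameters from the noisy data.
(mu_max, Ks), _ = curve_fit(monod, S, mu_obs, p0=(0.1, 5.0))
```

With substrate concentrations spanning well above and below K_s, both parameters are identifiable; data confined to S ≫ K_s would pin down μ_max but leave K_s poorly constrained.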
Higher influent surfactant/nitrate concentration ratios caused a longer startup period and supported more surfactant utilization and biofilm growth. Both models correlated with the empirical data. A model assuming constant biofilm density is computationally simpler and easier to implement. Therefore, a suitable anoxic packed bed reactor for the removal of the surfactant Geropon TC-42™ can be designed using the estimated kinetic values and a model assuming constant biofilm density.

Item: Local feedback regularization of three-dimensional Navier-Stokes equations on bounded domains (Texas Tech University, 1997-05). Balogh, Andras.

The specific problem considered here is inspired by recent advances in the control of nonlinear distributed parameter systems and its possible applications to hydrodynamics. The main objective is to investigate the extent to which the 3-dimensional Navier-Stokes system can be regularized using a particular, physically motivated, feedback control law. The specific choice of feedback mechanism is motivated by a work of O.A. Ladyzhenskaya [7], in which she introduces a modification of the Navier-Stokes equation on a three-dimensional bounded domain and shows that the resulting perturbed system possesses global dynamics and, furthermore, that this dynamics is stable. It is in this sense that we understand the system to be regularized.

Item: A method for parameter estimation and system identification for model based diagnostics (2010-12). Rengarajan, Sankar Bharathi; Bryant, Michael David; Akella, Maruthi R.

Model-based fault detection techniques utilize functional redundancies in the static and dynamic relationships among system inputs and outputs for fault detection and isolation. Analytical models based on the underlying physics of the system can capture the dependencies between different measured signals in terms of system states and parameters. These physical models of the system can be used as a tool to detect and isolate system faults.
As a machine degrades, system outputs deviate from the desired outputs, generating residuals defined by the error between sensor measurements and the corresponding model-simulated signals. These error residuals contain valuable information for interpreting system states and parameters. Taking the measurements from a faulty system as the baseline, the parameters of the idealized model can be varied to minimize these residuals, a process called "parameter tuning." A framework to automate this parameter tuning process is presented, with a focus on DC motors and 3-phase induction motors. The parameter tuning module is a multi-tier module designed to operate on real system models that are highly nonlinear. It combines techniques such as Quasi-Monte Carlo (QMC) sampling (Hammersley sequencing) and a genetic algorithm (the Non-dominated Sorting Genetic Algorithm) with an Extended Kalman Filter (EKF), which utilizes the system dynamics information available via the physical models of the system. A tentative graphical user interface (GUI) was developed to simplify the interaction between a machine operator and the module. The tuning module was tested with real measurements from a DC motor, and a simulation study was performed on a 3-phase induction motor by suitably adjusting parameters in an analytical model. The QMC sampling and genetic algorithm stages worked well even on measurement data with the system operating in steady-state condition, but their downsides were computational expense and the inability to estimate parameters online: they form a batch estimator. The EKF module enabled online estimation, with updates made from incoming measurements, but the observability of the system based on incoming measurements posed a major challenge when working with state estimation filters.
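Hammersley sequencing, used here for the QMC sampling stage, is straightforward to generate. This sketch follows the standard construction (first coordinate i/n, remaining coordinates radical inverses in successive prime bases); the dimension and point count are arbitrary.

```python
import numpy as np

def radical_inverse(i, base):
    """Van der Corput radical inverse: mirror the base-b digits of i
    about the radix point."""
    inv, f = 0.0, 1.0 / base
    while i > 0:
        inv += f * (i % base)
        i //= base
        f /= base
    return inv

def hammersley(n, dim):
    """n low-discrepancy points in [0,1)^dim: first coordinate i/n,
    the rest radical inverses in the primes 2, 3, 5, ..."""
    primes = [2, 3, 5, 7, 11, 13]
    pts = np.empty((n, dim))
    for i in range(n):
        pts[i, 0] = i / n
        for d in range(1, dim):
            pts[i, d] = radical_inverse(i, primes[d - 1])
    return pts

pts = hammersley(256, 3)   # low-discrepancy samples of a 3-D parameter cube
```

Scaling these points to the admissible parameter ranges gives the candidate parameter sets that the batch stages of such a tuning module would evaluate against the residuals.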
Implementation details and results are included, with plots comparing the real and faulty systems.

Item: Modeling, simulation and experimental verification of contact/impact dynamics in flexible articulated structures (Texas Tech University, 1998-05). Hariharesan, Seralaathan.

Robots are used in diverse applications, ranging from entertainment to manufacturing to space applications. Each application has its own requirements in terms of performance, design, and operating environment. Based on these requirements, a designer or researcher has to design a robot that performs its designated task with the maximum possible efficiency. Robots are widely used in manufacturing for machining, assembly line operations, welding, painting, inspection, and more. They are also used in a host of other areas: in laboratories, to place and remove test tubes in centrifuges and to handle hazardous chemicals; in the nuclear industry, to handle radioactive fuel as well as radioactive waste; and in remote or highly contaminated areas, to measure radiation or toxicity levels. Robots have also found their way into agriculture, an interesting application being sheep-shearing machines. There are submersible robotic vehicles used for deep-sea exploration and for mining the ocean floor. Last, but not least, there is the space industry, which uses robots in various forms; robots in space applications usually face environments that are hostile to human survival.
Planetary rovers with manipulator arms, satellite maintenance robots, manipulator arms for space manufacturing and for the construction of space stations and spaceships, and unmanned exploration vehicles are some of the applications of robots in space.

Item: Parameter Estimation of Dynamic Air-conditioning Component Models Using Limited Sensor Data (2011-08-08). Hariharan, Natarajkumar.

This thesis presents an approach for identifying critical model parameters in dynamic air-conditioning systems using limited sensor information. The expansion valve model and compressor model parameters play a crucial role in the system model's accuracy. In the past, these parameters have been estimated using a mass flow meter; however, this is an expensive device and at times impractical. In response to these constraints, a novel method to estimate the unknown parameters of the expansion valve model and the compressor model is developed, using a gray-box model obtained by augmenting the expansion valve model, the evaporator model, and the compressor model. Two numerical search algorithms, nonlinear least squares and Simplex search, are used to estimate the parameters of the expansion valve model and the compressor model by minimizing the error between the model output and the experimental system's output. Results demonstrate that the nonlinear least-squares algorithm was more robust for this estimation problem than the Simplex search algorithm. Two types of expansion valves are considered: the Electronic Expansion Valve and the Thermostatic Expansion Valve. The Electronic Expansion Valve model is a static model, its dynamics being much faster than the system's dynamics; the Thermostatic Expansion Valve model, however, is dynamic. The parameter estimation algorithm developed is validated on two different experimental systems to confirm the practicality of the approach.
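The two search algorithms can be compared on a toy static valve model. Everything below is a hypothetical stand-in: the flow relation m = c·u·√ΔP + b, the data, and the solver settings are illustrative, not the thesis's models. With SciPy, nonlinear least squares and Nelder-Mead simplex search minimize the same residual.

```python
import numpy as np
from scipy.optimize import least_squares, minimize

# Hypothetical static valve model: mass flow = c * u * sqrt(dP) + b.
# Estimate c and b from noisy synthetic measurements with both solvers.
rng = np.random.default_rng(0)
u = rng.uniform(0.2, 1.0, 40)           # valve opening command
dP = rng.uniform(200.0, 900.0, 40)      # pressure differential (kPa)
m_obs = 0.004 * u * np.sqrt(dP) + 0.001 * rng.standard_normal(40)

def resid(p):
    c, b = p
    return c * u * np.sqrt(dP) + b - m_obs

# Nonlinear least squares works on the residual vector directly...
nls = least_squares(resid, x0=[0.001, 0.0])

# ...while simplex search minimizes the scalar sum of squares.
simplex = minimize(lambda p: np.sum(resid(p) ** 2), x0=[0.001, 0.0],
                   method="Nelder-Mead")
```

On this smooth problem both converge to the same coefficients; least squares additionally exploits the residual structure, which is one reason it tends to be the more robust choice for model fitting.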
Knowing the model parameters accurately can lead to better models for control and fault detection applications. In addition to parameter estimation, this thesis also provides and validates a simple, usable mathematical model for the Thermostatic Expansion Valve.

Item: Risk-averse periodic preventive maintenance optimization (2011-08). Singh, Inderjeet, 1978-; Popova, Elmira; Morton, David P.; Damien, Paul; Hasenbein, John J.; Kutanoglu, Erhan.

We consider a class of periodic preventive maintenance (PM) optimization problems for a single piece of equipment that deteriorates with time or use and can be repaired upon failure through corrective maintenance (CM). We develop analytical and simulation-based optimization models that seek an optimal periodic PM policy, minimizing the sum of the expected total cost of PMs and the risk-averse cost of CMs over a finite planning horizon. In the simulation-based models, we assume that both types of maintenance actions are imperfect, whereas our analytical models consider imperfect PMs with minimal CMs. The effectiveness of maintenance actions is modeled using age reduction factors: for a repairable unit of equipment, its virtual age, not its calendar age, determines the associated failure rate. Therefore, two sets of parameters, one describing the effectiveness of maintenance actions and the other defining the underlying failure rate of the equipment, are critical to our models. Under a given maintenance policy, the two sets of parameters and a virtual-age-based age-reduction model completely define the failure process of a piece of equipment. In practice, the true failure rate and the exact quality of the maintenance actions cannot be determined, and are often estimated from the equipment failure history. We use a Bayesian approach to parameter estimation, under which a random-walk-based Gibbs sampler provides posterior estimates for the parameters of interest.
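A scaled-down version of such a sampler: random-walk Metropolis-within-Gibbs for the shape and scale of a Weibull failure-time model on synthetic data. The multiplicative proposals and the implied flat priors on the log-parameters are illustrative assumptions; the thesis's model additionally involves age-reduction factors and maintenance history that are omitted here.

```python
import numpy as np

# Synthetic failure times from a Weibull with shape k = 2 and scale s = 100.
rng = np.random.default_rng(0)
t = rng.weibull(2.0, 200) * 100.0

def loglik(k, s):
    # Weibull log-likelihood: log f = log(k/s) + (k-1) log(t/s) - (t/s)^k.
    return np.sum(np.log(k / s) + (k - 1) * np.log(t / s) - (t / s) ** k)

k, s = 1.0, 50.0          # deliberately poor starting values
chain = []
for it in range(5000):
    for which in (0, 1):  # Gibbs-style sweep: one parameter at a time
        kp, sp = k, s
        if which == 0:
            kp = k * np.exp(0.1 * rng.standard_normal())  # random-walk in log k
        else:
            sp = s * np.exp(0.1 * rng.standard_normal())  # random-walk in log s
        # Metropolis accept/reject on the log-likelihood ratio
        # (flat prior on the log-parameters).
        if np.log(rng.random()) < loglik(kp, sp) - loglik(k, s):
            k, s = kp, sp
    chain.append((k, s))

post = np.array(chain[1000:])   # discard burn-in; rows are posterior draws
```

The retained draws approximate the joint posterior; their means and spread play the role that ML point estimates cannot, which is what enables the risk-averse treatment of the CM cost downstream.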
Our posterior estimates for several datasets from the literature are consistent with published results. Furthermore, our computational results demonstrate that our Gibbs sampler is clearly preferable to a general rejection-sampling-based parameter estimation method for this class of problems. We present a general simulation-based periodic PM optimization model, which uses the posterior estimates to simulate the number of operational equipment failures under a given periodic PM policy. Optimal periodic PM policies under the classical maximum likelihood (ML) and Bayesian estimates are obtained for several datasets. Limitations of the ML approach are revealed for a dataset from the literature, in which the use of ML parameter estimates in the maintenance optimization model fails to capture a trivial optimal PM policy. Finally, we introduce a single-stage and a two-stage formulation of the risk-averse periodic PM optimization model with imperfect PMs and minimal CMs. Such models apply to a class of complex equipment with many parts, whose operational failures are addressed by replacing or repairing a few parts, thereby not affecting the failure rate of the equipment under consideration. For general values of the PM age reduction factors, we provide sufficient conditions to establish the convexity of the first and second moments of the number of failures, and of the risk-averse expected total maintenance cost, over a finite planning horizon. For increasing Weibull rates and a general class of increasing and convex failure rates, we show that these convexity results are independent of the PM age reduction factors. In general, the optimal periodic PM policy under the single-stage model is no better than the optimal two-stage policy.
If PMs are assumed perfect, however, we establish that the single-stage and two-stage optimization models are equivalent.

Item Scalable, adaptive methods for forward and inverse problems in continental-scale ice sheet modeling(2015-08) Isaac, Tobin Gregory; Ghattas, Omar N.; Stadler, Georg, Ph. D.; Arbogast, Todd; Biros, George; Catania, Ginny; Oden, John Tinsley
Projecting the ice sheets' contribution to sea-level rise is difficult because of the complexity of accurately modeling ice sheet dynamics for the full polar ice sheets, because of the uncertainty in key, unobservable parameters governing those dynamics, and because quantifying the uncertainty in projections is necessary when determining the confidence to place in them. This work presents the formulation and solution of the Bayesian inverse problem of inferring, from observations, a probability distribution for the basal sliding parameter field beneath the Antarctic ice sheet. The basal sliding parameter is used within a high-fidelity nonlinear Stokes model of ice sheet dynamics. This model maps the parameters "forward" onto a velocity field that is compared against observations. Due to the continental scale of the model, both the parameter field and the state variables of the forward problem have a large number of degrees of freedom: we consider discretizations in which the parameter field has more than 1 million degrees of freedom. The Bayesian inverse problem is thus to characterize an implicitly defined distribution in a high-dimensional space. This is a computationally demanding problem that requires that scalable and efficient numerical methods be used throughout: in discretizing the forward model; in solving the resulting nonlinear equations; in solving the Bayesian inverse problem; and in propagating the uncertainty encoded in the posterior distribution of the inverse problem forward onto important quantities of interest.
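The low-rank idea used in this setting (retain only the parameter directions that the data actually inform, relative to the prior) can be sketched on a toy linear inverse problem. The dimensions, prior, and noise level below are arbitrary stand-ins, not the ice sheet model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy linear inverse problem d = G m + noise: many parameters, few data,
# so the data-misfit Hessian is low rank (dimensions are arbitrary here).
n_param, n_data = 200, 15
G = rng.standard_normal((n_data, n_param))
noise_prec = 1.0 / 0.1**2      # 1 / sigma^2 (assumed noise level)
prior_prec = 1.0               # identity prior precision (assumed)

# Hessian of the negative log posterior = prior_prec * I + H_misfit.
H_misfit = noise_prec * G.T @ G

# Keep only the eigendirections in which the data dominate the prior.
evals, evecs = np.linalg.eigh(H_misfit)
keep = evals > prior_prec
V, lam = evecs[:, keep], evals[keep]

# Sherman-Morrison-Woodbury form of the posterior covariance:
# (prior_prec I + V diag(lam) V^T)^{-1}
#   = I / prior_prec - V diag(lam / (prior_prec (prior_prec + lam))) V^T
C_post = (np.eye(n_param) / prior_prec
          - (V * (lam / (prior_prec * (prior_prec + lam)))) @ V.T)

# Push the posterior covariance forward onto a scalar quantity of interest.
w = rng.standard_normal(n_param)
var_q = float(w @ C_post @ w)
```

At scale, the eigendecomposition is replaced by a randomized or Lanczos method applied matrix-free via adjoint solves, which is what makes the approach tractable for millions of parameters.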
To address discretization, a hybrid parallel adaptive mesh refinement format is designed and implemented for ice sheets that is suited to the large width-to-height aspect ratios of the polar ice sheets. An efficient solver for the nonlinear Stokes equations is designed for high-order, stable, mixed finite-element discretizations on these adaptively refined meshes. A Gaussian approximation of the posterior distribution of the parameters is defined, whose mean and covariance can be computed efficiently and scalably using adjoint-based methods from PDE-constrained optimization. Using a low-rank approximation of the covariance of this distribution, the covariance of the parameter is pushed forward onto quantities of interest.

Item Statistical Inference in Inverse Problems(2012-07-16) Xun, Xiaolei
Inverse problems have recently gained popularity in statistical research. This dissertation consists of two statistical inverse problems: a Bayesian approach to the detection of small low-emission sources on a large random background, and parameter estimation methods for partial differential equation (PDE) models. The source detection problem arises, for instance, in some homeland security applications. We address the problem of detecting the presence and location of a small low-emission source inside an object when the background noise dominates. The goal is to reach signal-to-noise ratio levels on the order of 10^-3. We develop a Bayesian approach to this problem in two dimensions. The method allows inference not only about the existence of the source, but also about its location. We derive Bayes factors for model selection and estimation of location based on Markov chain Monte Carlo simulation. A simulation study shows that, with a sufficiently high total emission level, our method can effectively locate the source. Differential equation (DE) models are widely used to model dynamic processes in many fields.
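A toy illustration of DE parameter estimation is a simple two-stage sketch: smooth the data with a basis-function expansion, then estimate the parameter by least squares on the DE residual. The model, basis, and numbers below are hypothetical choices for illustration only.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy DE model y'(t) = -theta * y(t), observed with noise (values assumed).
theta_true = 0.8
t = np.linspace(0.0, 4.0, 80)
y_obs = np.exp(-theta_true * t) + 0.01 * rng.standard_normal(t.size)

# Stage 1: smooth the data with a basis expansion (a polynomial basis here;
# spline bases are the more common choice in practice).
coeffs = np.polyfit(t, y_obs, deg=6)
y_hat = np.polyval(coeffs, t)
dy_hat = np.polyval(np.polyder(coeffs), t)

# Stage 2: least squares on the DE residual dy/dt + theta * y = 0,
# using interior points where the smoothed derivative is reliable.
i = slice(8, -8)
theta_hat = -float(dy_hat[i] @ y_hat[i]) / float(y_hat[i] @ y_hat[i])
```

Because the derivative of the smoothed fit is estimated separately from the parameter, errors from the smoothing stage propagate into the estimate, which is the weakness of two-stage schemes that joint-modeling approaches aim to avoid.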
The forward problem of solving the equations for given parameters that define the DEs has been studied extensively in the past. However, the inverse problem of estimating parameters based on observed state variables has received relatively sparse treatment in the statistical literature, especially for PDE models. We propose two joint modeling schemes to solve for constant parameters in PDEs: a parameter cascading method and a Bayesian treatment. In both methods, the unknown functions are expressed via basis function expansions. For the parameter cascading method, we develop the algorithm to estimate the parameters and derive a sandwich estimator of the covariance matrix. For the Bayesian method, we develop the joint model for the data and the PDE, and describe how the Markov chain Monte Carlo technique is employed to make posterior inference. A straightforward two-stage method is to first fit the data and then estimate the parameters by the least squares principle. The three approaches are illustrated using simulated examples and compared via simulation studies. Simulation results show that the proposed methods outperform the two-stage method.

Item The detection and consequences of beta nonstationarity(Texas Tech University, 1986-12) Howe, Thomas Stanley
The development of return-generating models, some of which rely on beta, has provided a means of examining the abnormal performance of stock returns around the time of an event. One of the problems in using such models is that beta is apparently nonstationary. This study uses simulated daily stock returns to examine the ability of the cumulative sum of the squared recursive residuals (CSRR) and the Quandt log-likelihood ratio (QLLR) to identify a given level of beta change, and the effect of a given level of beta change on the results of abnormal returns tests.
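The CSRR statistic can be sketched as follows: compute one-step-ahead recursive residuals from an expanding-window market-model regression, then form the normalized cumulative sum of their squares. The simulated return process and parameter values below are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated market-model returns r_t = alpha + beta_t * m_t + e_t,
# with beta shifting halfway through the sample (all values illustrative).
n = 200
m = 0.01 * rng.standard_normal(n)                 # market returns
beta = np.where(np.arange(n) < n // 2, 1.0, 1.5)  # 50 percent beta change
r = beta * m + 0.002 * rng.standard_normal(n)     # security returns

X = np.column_stack([np.ones(n), m])
k = X.shape[1]

# Recursive residuals: standardized one-step-ahead prediction errors
# from OLS refit on an expanding window.
w = []
for s in range(k, n):
    b = np.linalg.lstsq(X[:s], r[:s], rcond=None)[0]
    x_new = X[s]
    f = 1.0 + x_new @ np.linalg.inv(X[:s].T @ X[:s]) @ x_new
    w.append((r[s] - x_new @ b) / np.sqrt(f))
w = np.array(w)

# CSRR: cumulative sum of squared recursive residuals, normalized to end
# at 1; under stable parameters it rises roughly linearly, and a parameter
# or variance change bends the path away from the diagonal.
csrr = np.cumsum(w**2) / np.sum(w**2)
```

Note that any change in residual variance also bends the CSRR path, which is consistent with the study's finding that significant CSRR results often reflect heteroscedasticity or outliers rather than a beta change.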
This study also uses daily stock returns surrounding the listing and delisting of firms' bonds on Standard and Poor's "CreditWatch" to examine the nature of capital asset pricing model (CAPM) parameter nonstationarity and the effect of allowing for CAPM parameter nonstationarity on abnormal returns test results for these firms. Analysis of the simulated security returns suggests that, given the range of error variances generally found in daily stock returns, the CSRR and QLLR show little ability to identify even a sudden 50 percent beta change and are highly sensitive to outliers. This low power appears not to present a problem: even a sudden 50 percent beta change leaves the rejection frequencies and average p-values of the abnormal returns tests, and the average abnormal return and mean square error of the CAPM regressions, largely unchanged. In the CreditWatch samples, the CSRR indicates parameter nonstationarity for nearly every security over a 4-year period. Comparison of the CSRR results with the results of traditional parameter nonstationarity tests suggests that the significant CSRR findings are more often associated with heteroscedasticity or outliers than with a beta change. In the CreditWatch section, the cumulative average residuals appear sensitive to the periods used in estimating the CAPM parameters and to the method used to allow for the apparent parameter nonstationarity. This sensitivity is due primarily to instability in the alpha estimates.

Item Vehicle-terrain parameter estimation for small-scale robotic tracked vehicle(2010-12) Dar, Tehmoor Mehmoud; Longoria, Raul G.; Fahrenthold, Eric; Bryant, Michael D.; Fernandez, Benito; Wang, Junmin
Methods for estimating vehicle-terrain interaction parameters for small-scale robotic vehicles have been formulated and evaluated using both simulation and experimental studies.
A model basis was developed, guided by experimental studies with an iRobot PackBot. The intention was to demonstrate whether a nominally instrumented robotic vehicle could be used as a test platform for generating data for vehicle-terrain parameter estimation. A comprehensive skid-steered model was found to be sensitive enough to distinguish between various forms of unknown terrain. This simulation study also verified that the Bekker model for large-scale vehicles, adopted for this research, was applicable to the small-scale robotic vehicle used in this work. This was also confirmed by estimating coefficients of friction and establishing their dependence on forward velocity and turning radius as the vehicle traverses different terrains. On establishing that mobility measurements for this robotic vehicle were sufficiently sensitive, it was found that estimates could be made of key dynamic variables and vehicle-terrain interaction parameters. Four main contributions are described for reliably and robustly using PackBot data for vehicle-terrain property estimation. These estimation methods should contribute to efforts to improve the mobility of small-scale tracked vehicles on uncertain terrains. The approach is embodied in a multi-tiered algorithm based on the dynamic and kinematic models for skid steering, as well as tractive force models parameterized by key vehicle-terrain parameters. To estimate and characterize the key parameters, nonlinear estimation techniques such as the Extended Kalman Filter (EKF), the Unscented Kalman Filter (UKF), and a General Newton-Raphson (GNR) method are integrated into this multi-tiered algorithm. A novel use of an EKF with an added state noise compensation algorithm is presented, which shows robustness and consistency in estimating slip variables and other parameters for deformable terrains. In the multi-tiered algorithm, a kinematic model of the robotic vehicle is used to estimate slip variables and turning radius.
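An EKF iteration of the kind used in such a multi-tiered algorithm can be sketched for a single scalar state; the random-walk state model, measurement function, and noise levels below are hypothetical stand-ins, not the thesis's vehicle models.

```python
import numpy as np

# Scalar EKF sketch: random-walk state model x_k = x_{k-1} + w_k, and a
# hypothetical nonlinear measurement z = x / (1 + x) + v.
def ekf_step(x, P, z, Q, R):
    # Predict (state transition F = 1 for a random walk).
    x_pred, P_pred = x, P + Q
    # Update: linearize h(x) = x / (1 + x) about the prediction.
    h = x_pred / (1.0 + x_pred)
    H = 1.0 / (1.0 + x_pred) ** 2      # dh/dx
    S = H * P_pred * H + R             # innovation variance
    K = P_pred * H / S                 # Kalman gain
    return x_pred + K * (z - h), (1.0 - K * H) * P_pred

# Track a constant true "slip" of 0.3 from noisy measurements.
rng = np.random.default_rng(4)
x_true, x, P = 0.3, 0.0, 1.0
for _ in range(200):
    z = x_true / (1.0 + x_true) + 0.01 * rng.standard_normal()
    x, P = ekf_step(x, P, z, Q=1e-6, R=0.01**2)
```

The added state noise compensation in the thesis plays the role of the Q term here: inflating the predicted covariance keeps the filter responsive and guards against divergence when the process model is imperfect.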
These estimated variables are stored in a truth table and used in a skid-steered dynamic model to estimate the coefficients of friction. The total estimated slip on the left and right tracks, along with the total tractive force computed using a motor model, are then used in the GNR algorithm to estimate the key vehicle-terrain parameters. These estimated parameters are cross-checked against and confirmed by the EKF estimation results. Further, these simulation results verify that the tracked vehicle's tractive force does not depend on cohesion for frictional soils. This sequential algorithm is shown to be effective in estimating vehicle-terrain interaction properties with relatively good accuracy. The estimated results obtained from the UKF and EKF are verified and compared with available experimental data, and tested on a PackBot traversing specified terrains at the Southwest Research Institute (SwRI) Small Robotics Testbed in San Antonio, Texas. Finally, based on the development and evaluation of small-scale vehicle testing, the effectiveness of on-board sensing methods and estimation techniques is also discussed for potential use in real-time estimation of vehicle-terrain parameters.
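The Newton-Raphson step at the heart of a GNR-style solve can be sketched on a deliberately simplified tractive-force relation with a single unknown terrain parameter. The load and force values are assumed, and the one-parameter model is a simplification; the thesis uses fuller Bekker-type models with several terrain parameters.

```python
import numpy as np

# Newton-Raphson on a simplified tractive-force relation: solve
# W * tan(phi) - F_t = 0 for a friction angle phi.
W = 120.0          # normal load on the track, N (assumed)
F_t = 65.0         # measured total tractive force, N (assumed)

phi = 0.5          # initial guess, rad
for _ in range(20):
    f = W * np.tan(phi) - F_t          # residual of the force balance
    df = W / np.cos(phi) ** 2          # derivative of f w.r.t. phi
    phi -= f / df                      # Newton update
```

With several terrain parameters, the scalar derivative becomes a Jacobian and the update a linear solve, but the iteration structure is the same.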