Browsing by Subject "Sampling"
Now showing 1 - 10 of 10
Item: A Single-Sample Rectifying Inspection Plan for Unknown Incoming Quality Distributions (Texas Tech University, 1971-12) Porter, John G. D.
Not available.

Item: An investigation of the effects of incoming quality and inspection rate on inspector accuracy (1967-06) Sosnowy, John Kenneth
Not available.

Item: An investigation of the effects of two types of inspector error on sampling inspection plans (Texas Tech University, 1967-06) McKnight, Kenneth Alan
The field of statistical quality control plays an important part in the manufacture of consumer, industrial, and military products, and sampling inspection constitutes one of its most important areas. Statistical sampling procedures have been developed to provide an alternative to one hundred percent inspection, especially in situations where testing is destructive or exceedingly expensive. Inspection procedures are generally analyzed as if human involvement had no effect, even though the successful operation of an inspection plan may depend to a large extent on a human inspector, whose involvement may range from recording inspection machine data to making a subjective evaluation of product quality. The classical formulations of sampling inspection plans assume that the inspector is always perfect, when in reality the human inspector can make two types of errors: he may classify a good item as bad, or he may classify a bad item as good.
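To make the two error types concrete, the short sketch below shows how an imperfect inspector shifts the apparent fraction defective and, through it, the acceptance probability of a single-sample plan. The error rates, plan parameters (n, c), and lot qualities are hypothetical values chosen for illustration; they are not taken from any of the theses listed here.

```python
from math import comb

def apparent_defect_rate(p, e1, e2):
    """Fraction of items an imperfect inspector reports as defective.

    p  : true fraction defective in the lot
    e1 : probability a good item is classified as defective (type 1 error)
    e2 : probability a defective item is classified as good (type 2 error)
    """
    return p * (1 - e2) + (1 - p) * e1

def acceptance_probability(p_reported, n, c):
    """P(accept lot) for a single-sample plan (n, c) under binomial sampling."""
    return sum(comb(n, d) * p_reported ** d * (1 - p_reported) ** (n - d)
               for d in range(c + 1))

# Illustrative plan: inspect n = 50 items, accept the lot if at most c = 2 are called defective.
n, c = 50, 2
for p in (0.01, 0.02, 0.05):
    perfect = acceptance_probability(p, n, c)
    flawed = acceptance_probability(apparent_defect_rate(p, e1=0.02, e2=0.10), n, c)
    print(f"p = {p:.2f}: P(accept) = {perfect:.3f} (perfect inspector), {flawed:.3f} (with errors)")
```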
Item: Bayesian Analysis of Transposon Mutagenesis Data (2012-07-16) DeJesus, Michael A.
Determining which genes are essential for the growth of a bacterial organism is an important question, as it is useful for the discovery of drugs that inhibit critical biological functions of a pathogen. To evaluate essentiality, biologists often use transposon mutagenesis to disrupt genomic regions within an organism, revealing which genes are able to withstand disruption and are therefore not required for growth. Next-generation sequencing technology augments transposon mutagenesis by providing high-resolution sequence data that identifies the exact location of transposon insertions in the genome. Although this high-resolution information has already been used to assess essentiality at a genome-wide scale, no formal statistical model capable of quantifying significance has been developed. This thesis presents a formal Bayesian framework for analyzing sequence information obtained from transposon mutagenesis experiments. Our method assesses the statistical significance of gaps in transposon coverage that are indicative of essential regions through a Gumbel distribution, and uses a Metropolis-Hastings sampling procedure to obtain posterior estimates of the probability of essentiality for each gene. We apply our method to libraries of M. tuberculosis transposon mutants to identify genes essential for growth in vitro, and show concordance with previous essentiality results based on hybridization. Furthermore, we show that our method can identify essential domains within genes by detecting significant sub-regions of open reading frames unable to withstand disruption, and that several genes involved in PG biosynthesis have essential domains.

Item: History matching and uncertainty quantification using sampling method (2009-05-15) Ma, Xianlin
Uncertainty quantification involves sampling the reservoir parameters correctly from a posterior probability function that is conditioned to both static and dynamic data. Rigorous sampling methods like Markov chain Monte Carlo (MCMC) are known to sample from the distribution but can be computationally prohibitive for high-resolution reservoir models; approximate sampling methods are more efficient but less rigorous for nonlinear inverse problems. There is a need for an approach to uncertainty quantification that is both efficient and rigorous for nonlinear inverse problems. First, we propose a two-stage MCMC approach using sensitivities for quantifying uncertainty in history matching geological models. In the first stage, we compute the acceptance probability for a proposed change in reservoir parameters based on a linearized approximation to flow simulation in a small neighborhood of the previously computed dynamic data. In the second stage, proposals that pass the first-stage criterion are assessed by running full flow simulations to ensure rigor. Second, we propose a two-stage MCMC approach using response surface models for quantifying uncertainty. The formulation allows us to history match three-phase flow simultaneously; the response surface, once built, exists independently of the expensive flow simulation and provides efficient samples for the reservoir simulation and MCMC in the second stage. Third, we propose a two-stage MCMC approach using upscaling and non-parametric regressions for quantifying uncertainty. A coarse-grid model acts as a surrogate for the fine-grid model through flow-based upscaling, and the coarse-scale response is corrected by error modeling via non-parametric regression to approximate the response of the computationally expensive fine-scale model. Our proposed two-stage sampling approaches are computationally efficient and rigorous, with a significantly higher acceptance rate than traditional MCMC algorithms. Finally, we developed a coarsening algorithm to determine an optimal reservoir simulation grid by grouping fine-scale layers so that the heterogeneity measure of a defined static property is minimized within the layers; the optimal number of layers is then selected based on a statistical analysis. The power and utility of our approaches have been demonstrated using both synthetic and field examples.
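The two-stage (delayed-acceptance) Metropolis-Hastings idea described in the abstract above can be sketched compactly: a cheap surrogate posterior screens each proposal, and only proposals that survive the coarse stage pay for an expensive fine-scale evaluation, with a second acceptance step that keeps the chain targeting the fine posterior. The surrogate, the toy posteriors, and the Gaussian random-walk proposal below are illustrative stand-ins, not the thesis's reservoir-simulation code.

```python
import math
import random

def two_stage_mh(x0, log_post_coarse, log_post_fine, propose, n_iter=2000):
    """Two-stage (delayed-acceptance) Metropolis-Hastings with symmetric proposals."""
    x = x0
    lp_c, lp_f = log_post_coarse(x), log_post_fine(x)
    chain = [x]
    for _ in range(n_iter):
        y = propose(x)
        lq_c = log_post_coarse(y)                 # cheap surrogate evaluation
        # Stage 1: screen the proposal against the coarse surrogate only.
        if math.log(random.random()) < lq_c - lp_c:
            lq_f = log_post_fine(y)               # expensive evaluation, run only for survivors
            # Stage 2: correction so the chain still targets the fine posterior.
            if math.log(random.random()) < (lq_f - lp_f) - (lq_c - lp_c):
                x, lp_c, lp_f = y, lq_c, lq_f
        chain.append(x)
    return chain

# Toy usage: the "fine" posterior stands in for a full flow simulation,
# the "coarse" one for a linearized or response-surface approximation.
fine = lambda x: -0.5 * (x - 3.0) ** 2 / 0.5
coarse = lambda x: -0.5 * (x - 2.8) ** 2 / 0.7
samples = two_stage_mh(0.0, coarse, fine, lambda x: x + random.gauss(0.0, 0.5))
```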
Item: Probabilistic bicriteria models: sampling methodologies and solution strategies (2010-08) Rengarajan, Tara; Morton, David P.; Hasenbein, John J.; Kutanoglu, Erhan; Muthuraman, Kumar; Popova, Elmira
Many complex systems involve simultaneous optimization of two or more criteria, with uncertainty in system parameters being a key driver in decision making. In this thesis, we consider probabilistic bicriteria models in which we seek to operate a system reliably while keeping operating costs low. High reliability translates into low risk of uncertain events that can adversely impact the system. In bicriteria decision making, a good solution must, at the very least, have the property that the criteria cannot both be improved relative to it. The problem of identifying a broad spectrum of such solutions can be highly involved, with no analytical or robust numerical techniques readily available, particularly when the system involves nontrivial stochastics. This thesis serves as a step toward addressing this issue. We show how to construct approximate solutions using Monte Carlo sampling that are close to optimal, easily computable, and subject to a low margin of error. Our approximations can be used in bicriteria decision making across several domains that involve significant risk, such as finance, logistics, and revenue management. As a first approach, we place a premium on a low risk threshold and examine the effects of a sampling technique that guarantees a prespecified upper bound on risk. Our model incorporates a novel construct in the form of an uncertain disrupting event whose time and magnitude of occurrence are both random. We show that stratifying the sample observations in an optimal way can yield savings of a high order, and we demonstrate generalized stratification techniques that enjoy this property and can be used without full distributional knowledge of the parameters that govern the time of disruption. Our work thus provides a computationally tractable approach for solving a wide range of bicriteria models via sampling with a probabilistic guarantee on risk; improved proximity to the efficient frontier is illustrated in the context of a perishable inventory problem. In contrast to this approach, we next solve a bicriteria facility sizing model in which risk is the probability that the system fails to jointly satisfy a vector-valued random demand. Here, instead of seeking a probabilistic guarantee on risk, we seek to approximate the efficient frontier well for a range of risk levels of interest. Replacing the risk measure with an empirical measure induced by a random sample, we solve a family of parametric chance-constrained and cost-constrained models. These two sampling-based approximations differ substantially in what is known about their asymptotic behavior, their computational tractability, and even their feasibility relative to the underlying "true" family of models. We establish, however, that in the bicriteria setting we are free to employ either the chance-constrained or the cost-constrained family of models, which improves our ability both to characterize the quality of the efficient frontiers arising from these sampling-based approximations and to solve the approximating models themselves. Our computational results reinforce the need for such flexibility and allow us to understand the behavior of confidence bounds for the efficient frontier. As a final step, we further study the efficient frontier in the cost-versus-risk tradeoff for the facility sizing model in the special case in which the (cumulative) distribution function of the underlying demand vector is concave in a region defined by a highly reliable system. In this case, the "true" efficient frontier is convex. We show that the convex hull of the efficient frontier of a sampling-based approximation (i) can be computed in strongly polynomial time by relying on a reformulation as a max-flow problem via the well-studied selection problem, and (ii) converges uniformly to the true efficient frontier when the latter is convex. We conclude with numerical studies that demonstrate these properties.
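As a toy illustration of the sampling-based cost-versus-risk tradeoff discussed above, the sketch below traces an approximate efficient frontier for a one-dimensional facility sizing problem: risk is the empirical probability, over a Monte Carlo sample of demand, that demand exceeds the chosen capacity. The demand distribution, cost model, and capacity grid are invented for the example and are not drawn from the thesis.

```python
import random

def empirical_frontier(demand_samples, capacities, unit_cost=1.0):
    """Cost-versus-risk points for a toy sizing problem, where risk(c) is the
    sample-average approximation of P(demand > capacity c)."""
    n = len(demand_samples)
    points = []
    for c in sorted(capacities):
        risk = sum(d > c for d in demand_samples) / n
        points.append((unit_cost * c, risk))
    return points

# Hypothetical demand model: positive, right-skewed demand.
random.seed(0)
demand = [random.lognormvariate(3.0, 0.4) for _ in range(5000)]
for cost, risk in empirical_frontier(demand, capacities=range(10, 61, 5)):
    print(f"cost = {cost:5.1f}   estimated risk = {risk:.3f}")
```

Since cost increases and estimated risk is nonincreasing in capacity, the nondominated points among these pairs trace the sampling-based approximation of the frontier; in the vector-valued demand setting of the thesis, characterizing and bounding that frontier is substantially harder.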
Item: Public news network: digital sampling to create a hybrid media feed (Texas A&M University, 2004-09-30) Stenner, Jack Eric
A software application called Public News Network (PNN) is created in this thesis; it functions to produce an aesthetic experience in the viewer. The application engenders this experience by presenting a three-dimensional virtual world that the viewer can navigate using the computer mouse and keyboard. As the viewer navigates the environment she sees irregularly shaped objects resting on an infinite ground plane and hears an ethereal wind. As the viewer nears the objects, the sound transforms into the sound of television static and text is displayed identifying each object as representative of an episode of the evening news. The viewer "touches" the episode and a "disembodied" transcript of the broadcast begins to scroll across the screen. With further interaction, video of the broadcast streams across the surfaces of the environment, distorted by the shapes upon which it flows. The viewer can further manipulate and repurpose the broadcast by searching for words contained within the transcript. The results of this search are reassembled into a new, re-contextualized display of video containing the search terms stripped from their original, pre-packaged context. It is this willful manipulation that completes the opportunity for true meaning to appear.

Item: Reinforcement Learning Control with Approximation of Time-Dependent Agent Dynamics (2013-04-30) Kirkpatrick, Kenton
Reinforcement learning has received a lot of attention over the years for systems ranging from static game playing to dynamic system control. Using reinforcement learning for control of dynamical systems provides the benefit of learning a control policy without needing a model of the dynamics. This opens the possibility of controlling systems for which the dynamics are unknown, but reinforcement learning methods like Q-learning do not explicitly account for time. In dynamical systems, time-dependent characteristics can have a significant effect on the control of the system, so it is necessary to account for system time dynamics while not relying on a predetermined model of the system. In this dissertation, algorithms are investigated for expanding the Q-learning algorithm to account for the learning of sampling rates and dynamics approximations. For determining a proper sampling rate, it is desired to find the largest sample time that still allows the learning agent to control the system to goal achievement. An algorithm called Sampled-Data Q-learning is introduced for determining both this sample time and the control policy associated with that sampling rate. Results show that the algorithm is capable of achieving a desired sampling rate that allows for system control while not sampling "as fast as possible". Determining an approximation of an agent's dynamics can be beneficial for the control of hierarchical multiagent systems by allowing a high-level supervisor to use the dynamics approximations for task-allocation decisions. To this end, algorithms are investigated for learning first- and second-order dynamics approximations; these are respectively called First-Order Dynamics Learning and Second-Order Dynamics Learning. The dynamics learning algorithms are evaluated on several examples that show their capability to learn accurate approximations of state dynamics. All of these algorithms are then evaluated on hierarchical multiagent systems for determining task allocation. The results show that the algorithms successfully determine appropriate sample times and accurate dynamics approximations for the agents investigated.
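For reference, the sketch below shows the tabular Q-learning update that sampled-data and dynamics-learning extensions such as those above build on. The environment interface (reset/step), the epsilon-greedy exploration, and all hyperparameters are generic placeholders rather than the dissertation's specific formulation.

```python
import random
from collections import defaultdict

def q_learning(env, actions, episodes=500, alpha=0.1, gamma=0.95, eps=0.1):
    """Tabular Q-learning with an epsilon-greedy behavior policy.

    `env` is assumed to expose reset() -> state and
    step(state, action) -> (next_state, reward, done)."""
    Q = defaultdict(float)                              # Q[(state, action)] -> value estimate
    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            if random.random() < eps:                   # explore
                action = random.choice(actions)
            else:                                       # exploit current estimates
                action = max(actions, key=lambda a: Q[(state, a)])
            next_state, reward, done = env.step(state, action)
            best_next = max(Q[(next_state, a)] for a in actions)
            # One-step Q-learning update toward the bootstrapped target.
            Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
            state = next_state
    return Q
```

In a sampled-data setting, such an agent would interact with the continuous plant only once per sample period; the algorithm described above additionally searches over that period, which this sketch does not attempt.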
Item: Systematic Sampling of Scanning Lidar Swaths (2011-02-22) Marcell, Wesley Tyler
Proof-of-concept lidar research has, to date, examined wall-to-wall models of forest ecosystems. While these studies have been important for verifying lidar's efficacy for forest surveys, complete coverage is likely not the most cost-effective way of using lidar as auxiliary data for operational surveys; sampling of some sort is the better alternative. This study examines the effectiveness of sampling with high point-density scanning lidar data and shows that systematic sampling is a better alternative to simple random sampling. It examines the bias and mean squared error of various estimators and concludes that a linear-trend-based, and especially an autocorrelation-assisted, variance estimator performs better than the commonly used simple random sampling-based estimator when sampling is systematic.

Item: The sensitivity of sampling inspection to inspector error (Texas Tech University, 1966-05) Davis, Allan Stevens
Not available.
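Finally, a small sketch of the comparison at the heart of "Systematic Sampling of Scanning Lidar Swaths" above: draw either a simple random sample or a systematic sample (every k-th unit from a random start) from a spatially structured sequence and compare the spread of the resulting estimates of the mean. The synthetic "swath" (a linear trend plus noise, loosely mimicking spatial structure) and the sample sizes are invented for the illustration; on such data the systematic estimates typically scatter less.

```python
import random
import statistics

def simple_random_sample(values, n):
    return random.sample(values, n)

def systematic_sample(values, n):
    """Every k-th unit from a random start, with k = len(values) // n."""
    k = len(values) // n
    start = random.randrange(k)
    return values[start::k][:n]

# Synthetic "swath" of a forest metric: a spatial trend plus noise.
random.seed(1)
population = [0.002 * i + random.gauss(0.0, 1.0) for i in range(10_000)]

def spread_of_means(sampler, n=100, reps=500):
    """Standard deviation of the sample mean over repeated draws."""
    return statistics.stdev(statistics.mean(sampler(population, n)) for _ in range(reps))

print("Simple random sampling, std of mean:", round(spread_of_means(simple_random_sample), 3))
print("Systematic sampling,    std of mean:", round(spread_of_means(systematic_sample), 3))
```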