Browsing by Subject "Probability"
Now showing 1 - 11 of 11
Item
Archimedes, Gauss and Stochastic computation: A new (old) approach to Fast Algorithms for the evaluation of transcendental functions of generalized Polynomial Chaos Expansions (2011-05)
Mckale, Kaleb D.; Long, Kevin; Howle, Victoria E.; Barnard, Roger W.; Monico, Christopher J.
In this paper, we extend the work of Debusschere et al. (2004) by introducing a new approach to evaluating transcendental functions of generalized polynomial chaos (PC) expansions. We derive the elementary algebraic operations for generalized PC expansions and show how these operations can be extended to polynomial and rational functions of PC expansions. We introduce and implement the Borchardt-Gauss algorithm, an Arithmetic-Geometric Mean (AGM)-type method, to derive the arctangent for the Jacobi-Chaos expansion. We compare the BG algorithm numerically against the Line Integral Method of Debusschere et al. and the Non-intrusive Spectral Projection (NISP) method. We present the future direction of our research, including incorporating more efficient AGM-type methods proposed by Carlson (1972) and Brent (1976) to calculate the arctangent and other transcendental functions.

Item
Combinatorial and probabilistic techniques in harmonic analysis (2012-05)
Lewko, Mark J., 1983-; Vaaler, Jeffrey D.; Beckner, William; Pavlovic, Natasa; Rodriguez-Villegas, Fernando; Zuckerman, David
We prove several theorems in the intersection of harmonic analysis, combinatorics, probability and number theory. In the second section we use combinatorial methods to construct various sets with pathological combinatorial properties. In particular, we answer a question of P. Erdos and V. Sos regarding unions of Sidon sets. In the third section we use incidence bounds and bilinear methods to prove several new endpoint restriction estimates for the paraboloid over finite fields. In the fourth and fifth sections we study variational maximal operators associated to orthonormal systems. Here we use probabilistic techniques to construct well-behaved rearrangements and base changes. In the sixth section we apply our variational estimates to a problem in sieve theory. In the seventh section, motivated by applications to sieve theory, we disprove a maximal inequality related to multiplicative characters.

Item
The effects of payoffs and feedback on the disambiguation of relative clauses (2014-12)
Chacartegui Quetglas, Luis; Bannard, Colin
This dissertation investigates two facts about language processing. The Good Enough Approach claims that language users do not form a fully detailed representation of the input unless the task at hand requires it. On the other hand, it has been shown that when language users are faced with ambiguous input, they display internal preferences as to what direction disambiguation should take. It has been proposed that these preferences are based on previous experience with similar inputs. This thesis investigates these two issues using tools from the fields of decision making and reinforcement learning. Specifically, feedback and payoffs associated with sentence interpretations are manipulated to explore reading behavior, understood as a process of information seeking, and disambiguation choices. In four eye-tracking reading experiments, the experimental stimuli are sentences containing a relative clause attachment ambiguity.
Experiment 1 investigates whether the combination of the degree of ambiguity of a sentence and the possible payoffs affects people's reading times for the potentially ambiguous parts of a sentence, as well as their disambiguation choices. Experiment 2 investigates the role of feedback in such processes, a combination related to expected utility maximization. Experiment 3 studies how participants learn from feedback under risky or non-risky conditions. The last experiment investigates whether participants adjust their responses to evidence provided by feedback, even overriding their initial internal bias towards a default response.

Item
Fender system behavior in random seas (2009-05-15)
Ofoegbu, James Nwachukwu
Fendering systems are widely used in offshore installations for attenuating the effects of the impact energy of ships and barges in berthing or moored conditions. This study focuses on investigating current design practices and developing a rational and functional approach to address random loading effects exerted on fendering systems. These loadings are often a consequence of combined wind, wave and current excitation as well as more controlled vessel motions. Dimensional analysis is used to investigate the degree to which empirical design data can be collapsed and to provide an indication of the nonlinearity associated with the empirical data for fender sizing. In addition, model test data specifically measuring the normal fender force for a coupled mini-TLP/Tender Barge, obtained at the Offshore Technology Research Center (OTRC) model basin, is used in this research investigation. This data was characterized in terms of the typical statistical moments, which include the mean, standard deviation, skewness and kurtosis. The maxima and extreme values were extracted from the fender response data based upon a zero-crossing analysis, and the results were studied in order to determine the underlying probability distribution function. Using selected parameter estimation techniques, coefficients of a best-fit two-parameter model were determined. An illustrative example is presented and discussed that contrasts the deterministic and probabilistic models.

Item
Greedy structure learning of Markov Random Fields (2011-08)
Johnson, Christopher Carroll; Ravikumar, Pradeep; Dhillon, Inderjit
Probabilistic graphical models are used in a variety of domains to capture and represent general dependencies in joint probability distributions. In this document we examine the problem of learning the structure of an undirected graphical model, also called a Markov Random Field (MRF), given a set of independent and identically distributed (i.i.d.) samples. Specifically, we introduce an adaptive forward-backward greedy algorithm for learning the structure of a discrete, pairwise MRF given a high-dimensional set of i.i.d. samples. The algorithm works by greedily estimating the neighborhood of each node independently through a series of forward and backward steps. By imposing a restricted strong convexity condition on the structure of the learned graph, we show that the structure can be fully learned with high probability given $n=\Omega(d\log(p))$ samples, where $d$ is the dimension of the graph and $p$ is the number of nodes. This is a significant improvement over existing convex-optimization based algorithms, which require a sample complexity of $n=\Omega(d^2\log(p))$ and a stronger irrepresentability condition.
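[Editorial note: the forward-backward greedy scheme described in this abstract can be illustrated with a short sketch. The following is a minimal, hypothetical Python illustration, not the thesis's implementation; the function names, the threshold eps, the use of scikit-learn logistic regression as the node-conditional fit, and the toy data are all assumptions made for illustration.]

# Minimal sketch (hypothetical, not the thesis's code): forward-backward greedy
# neighborhood selection for one node of a binary pairwise MRF, scored by the
# node's conditional log-likelihood.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import log_loss

def node_loss(X, y, support):
    """Conditional negative log-likelihood of node y given a candidate neighborhood."""
    if not support:
        return log_loss(y, np.full(len(y), y.mean()), labels=[0, 1])
    cols = sorted(support)
    clf = LogisticRegression(C=1e6, max_iter=1000).fit(X[:, cols], y)  # essentially unpenalized
    return log_loss(y, clf.predict_proba(X[:, cols])[:, 1], labels=[0, 1])

def greedy_neighborhood(X, node, eps=1e-3):
    """Greedy forward-backward estimate of the neighborhood of one node."""
    p = X.shape[1]
    y = X[:, node]
    others = [j for j in range(p) if j != node]
    support, loss = set(), node_loss(X, y, set())
    while True:
        # Forward step: add the candidate that most improves the conditional fit.
        gains = {j: loss - node_loss(X, y, support | {j}) for j in others if j not in support}
        best = max(gains, key=gains.get) if gains else None
        if best is None or gains[best] < eps:
            break
        support.add(best)
        loss -= gains[best]
        # Backward step: prune variables whose removal costs less than eps/2.
        for j in list(support):
            drop_loss = node_loss(X, y, support - {j})
            if drop_loss - loss < eps / 2:
                support.remove(j)
                loss = drop_loss
    return sorted(support)

# Toy usage: a 3-node chain 0 -- 1 -- 2 with correlated binary samples.
rng = np.random.default_rng(0)
x0 = rng.integers(0, 2, 2000)
x1 = np.where(rng.random(2000) < 0.15, 1 - x0, x0)
x2 = np.where(rng.random(2000) < 0.15, 1 - x1, x1)
X = np.column_stack([x0, x1, x2])
print(greedy_neighborhood(X, node=1))  # expected neighborhood: [0, 2]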
We further support these claims with an empirical comparison of the greedy algorithm to node-wise $\ell_1$-regularized logistic regression, and we provide a real-data analysis of the greedy algorithm using the Audioscrobbler music listener dataset. The results of this document provide an additional representation of work submitted by A. Jalali, C. Johnson, and P. Ravikumar to NIPS 2011.

Item
Knowledge and understanding of probability and statistics topics by preservice PK-8 teachers (Texas A&M University, 2005-11-01)
Carter, Tamara Anthony
Given the importance placed on probability and statistics in the PK-8 curriculum by the National Council of Teachers of Mathematics (2000) and on teachers by the Interstate New Teacher Assessment and Support Consortium (1995) and the Conference Board of the Mathematical Sciences (2001), it is important to know how well preservice teachers understand topics that are vital to a thorough understanding of the probability and statistics topics emphasized by national standards. It is necessary for a teacher to thoroughly understand the subject matter in order to teach effectively, but that is not sufficient: a teacher must also be able to successfully communicate with the students about that material. Therefore, this study utilized a standards- and literature-based assessment to study 210 preservice teachers, with the goal of taking the first step in determining whether current PK-8 preservice teachers are prepared to teach select probability and statistics topics specified in standards documents. The assessment contains 11 probability and statistics items with a total of 23 parts in a variety of short-answer, multiple-choice, and extended-response formats. It is described in detail in Chapter III and reproduced in Appendix A. A confirmatory factor analysis indicated that, for this sample of PK-8 preservice teachers, the assessment measured the underlying constructs on which it was based. Preservice teachers' ability to answer these items varied greatly. For short-answer and multiple-choice items, the percentage of preservice teachers incorrectly answering an item was as high as 87% and as low as 18%. For extended-response items, incorrect answers were provided by as few as 12% of the participants on one item and by as many as 83% on another. Individual responses were analyzed to illustrate correct conceptions and misconceptions of these preservice teachers. There was not a statistically significant difference between responses based on the grade band the participants were preparing to teach, but students specializing in mathematics and science did perform better than other participants. Although effect sizes were small, the amount of time elapsed since an elementary statistics class was taken and the number of methods courses taken were positively associated with performance on this assessment.

Item
LRFD Calibration of Bridge Foundations Subjected to Scour and Risk Analysis (2013-04-30)
Yao, Congpu
Bridge scour is the loss of soil by erosion due to water flowing around bridge supports. Scour has been the number one cause of bridge collapse in the United States, with an average of 22 bridges collapsing each year. This dissertation addresses three topics related to bridge scour. First, three sets of databases are used to quantify the statistical parameters associated with the scatter between the predicted and measured scour depth, as well as the probability that a deterministically predicted scour depth will be exceeded.
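[Editorial note: as a minimal sketch of the quantities just described, the following hypothetical Python snippet computes a bias factor and an exceedance probability from paired predicted and measured scour depths; the numbers and the lognormal fit are illustrative assumptions, not data or methods from the dissertation.]

# Minimal sketch (hypothetical data): bias factor and exceedance probability
# from paired predicted and measured scour depths.
import numpy as np
from scipy import stats

predicted = np.array([1.8, 2.4, 3.1, 2.0, 2.7, 3.5, 1.5, 2.2])   # m, made up
measured  = np.array([1.2, 2.6, 2.4, 1.1, 2.9, 2.8, 1.0, 1.6])   # m, made up

ratio = measured / predicted                      # bias factor = measured / predicted
bias_mean = ratio.mean()
bias_cov = ratio.std(ddof=1) / bias_mean

# Probability that a deterministically predicted depth is exceeded in the data.
p_exceed = np.mean(measured > predicted)

# A lognormal fit to the ratio gives a smooth estimate of the same probability.
mu, sigma = np.log(ratio).mean(), np.log(ratio).std(ddof=1)
p_exceed_fit = 1.0 - stats.lognorm.cdf(1.0, s=sigma, scale=np.exp(mu))

print(f"mean bias {bias_mean:.2f}, COV {bias_cov:.2f}")
print(f"empirical P(exceedance) {p_exceed:.2f}, lognormal fit {p_exceed_fit:.2f}")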
The analysis results from these databases will also be used to provide the bias factors for scour depth predictions in practice. In the second part of the dissertation, these statistical parameters are used to develop a reliability-based Load and Resistance Factor Design (LRFD) procedure for shallow and deep foundations subjected to scour. The goal is to provide a design procedure for bridge foundations in which the reliability of the foundation is the same with or without scour. For shallow foundations, the key design issue is the location of the foundation depth and the probability that the scour depth will exceed the foundation depth. Therefore, for shallow foundations, the proposed LRFD calibration is based on the probability of exceedance of the predicted scour depth. For deep foundations, however, the key design issue is the resistance factor associated with the axial capacity of a pile. Hence, the proposed LRFD calibration for deep foundations is based on a reliability analysis using the First-Order Reliability Method (FORM). The dissertation is broadened in the third part by analyzing the risk associated with bridge scour, where risk is defined as the probability of failure times the value of the consequences. In the third part, the risk associated with bridge scour is also compared to risks associated with other engineering structures. Target values of acceptable risk are recommended as part of the conclusions. The outcome of the research will modify the current "AASHTO LRFD Bridge Design Specifications" developed by the American Association of State Highway and Transportation Officials (AASHTO) and help practitioners design foundations of bridges over rivers for a uniform probability of failure in the case of scour. The risk of bridge scour is also quantified in the dissertation and compared with common societal risks and civil engineering risks, which will help engineers understand the risk level associated with bridge scour.

Item
Probabilistic models and reliability analysis of scour depth around bridge piers (2009-06-02)
Bolduc, Laura Christine
Scour at a bridge pier is the formation of a hole around the pier due to the erosion of soil by flowing water; this hole in the soil reduces the carrying capacity of the foundation and the pier. Excessive scour can cause a bridge pier to fail without warning. Current predictions of the depth of the scour hole around a bridge pier are based on deterministic models. This paper considers two alternative deterministic models to predict scour depth. For each deterministic model, a corresponding probabilistic model is constructed using a Bayesian statistical approach and available field and experimental data. The developed probabilistic models account for the bias in the estimates of the deterministic models and for the model uncertainty. Parameters from both prediction models are compared to determine their accuracy. The developed probabilistic models are used to estimate the probability of exceedance of scour depth around bridge piers. The method is demonstrated on an example bridge pier. The values of the model parameters suggest that the maximum scour depth predicted by the deterministic HEC-18 Sand and HEC-18 Clay models tends to be conservative. Evidence is also found that the applicability of the HEC-18 Clay method is not limited to clay; it can also be used for other soil types.
The main advantage of the HEC-18 Clay method over the HEC-18 Sand method is that it predicts the depth of scour as a function of time and can be used to estimate the final scour at the end of the design life of a structure. The paper addresses model uncertainties for given hydrologic variables; hydrologic uncertainties have been presented in a separate paper.

Item
Quantifying and mitigating wind power variability (2015-12)
Niu, Yichuan; Santoso, Surya; Arapostathis, Aristotle; Baldick, Ross; Longoria, Raul G.; Tiwari, Mohit
Understanding the variability and unpredictability of wind power is essential for improving power system reliability and energy dispatch in transmission and distribution systems. The research presented herein addresses a major challenge in managing and utilizing wind energy with mitigated fluctuation and intermittency. Caused by varying wind speed, power variability can be understood as power imbalances; these imbalances create a power surplus or deficiency with respect to the desired demand. To ameliorate this issue, the fluctuating wind energy needs to be properly quantified, controlled, and re-distributed to the grid. The first major study in this dissertation develops accurate wind turbine models and model reductions to generate wind power time series in a laboratory time-efficient manner. Reliable wind turbine models can also perform power control events and acquire dynamic responses closer to real-world conditions. Therefore, a Type 4 direct-drive wind turbine with power electronic converters has been modeled and designed with detailed aerodynamic and electric parameters based on a given generator. Then, using averaging and approximation techniques for power electronic circuits, the order of the original model is lowered to boost the computational efficiency of simulating long-term wind speed data. To quantify the wind power time series, efforts are made to enhance the adaptability and robustness of the original conditional range metric (CRM) algorithm, which has been proposed in the literature for quantitatively assessing the power variability within a certain time frame. The improved CRM performs better on scarce and noisy time-series data and has reduced computational complexity. Rather than using a discrete probability model, the improved method implements a continuous gamma distribution with parameters estimated by maximum likelihood estimators. Building on this work, wind-farm-level behavior can be revealed by analyzing data from long-term simulations of individual wind turbine models. Mitigating the power variability with reserved generation sources is attempted, and the generation scenarios are generalized using an unsupervised machine learning algorithm based on the power correlations of the individual wind turbines. A systematic blueprint for reducing intra-hour power variations by coordinating fast- and slow-response energy storage systems (ESS) has been proposed. Methods for sizing, coordination control, ESS regulation, and power dispatch schemes are illustrated in detail.
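[Editorial note: the gamma-distribution step mentioned in this abstract can be sketched in a few lines. The snippet below fits a gamma distribution by maximum likelihood to the range of a synthetic wind power series within fixed 10-minute windows; the series, window length, and threshold are illustrative assumptions and do not reproduce the dissertation's improved CRM algorithm.]

# Minimal sketch (synthetic data): maximum-likelihood gamma fit to intra-window
# wind power ranges, then an exceedance probability query.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 3 * 24 * 60                                              # three days at 1-minute resolution
power = 50 + 15 * np.sin(np.arange(n) * 2 * np.pi / 1440) + rng.normal(0, 1.5, n)  # MW

window = 10                                                  # minutes per window
frames = power[: n // window * window].reshape(-1, window)
ranges = frames.max(axis=1) - frames.min(axis=1)             # intra-window variability

# Maximum-likelihood gamma fit (location pinned at zero, since ranges are non-negative).
shape, loc, scale = stats.gamma.fit(ranges, floc=0)

# Probability that the 10-minute power range exceeds, say, 3 MW under the fitted model.
p_exceed = stats.gamma.sf(3.0, shape, loc=loc, scale=scale)
print(f"gamma shape {shape:.2f}, scale {scale:.2f}, P(range > 3 MW) {p_exceed:.3f}")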
Applied to real-world data, these methods have been demonstrated to be effective in reducing short-term wind power variability to an expected level.

Item
"Should I switch?" Controversies created by an advice column (2010-08)
Lehman, Sandra Elizabeth; Daniels, Mark L.; Armendáriz, Efraim P.
In the 1990s, the circumstances of being a contestant on a popular game show were discussed in a widely read question-and-answer column in Parade Magazine. If the contestant switched from the initial choice to a second choice offered by the host, would the chances of winning the desired prize increase? The columnist's response to the reader sparked a good deal of controversy among mathematicians. Shortly after the publication of this answer, articles appeared in various mathematical publications, some supporting and some refuting the columnist's answer. This document reports the results of research into the controversy generated by some of the probability problems used on the Let's Make a Deal game show. Using a variety of approaches and assumptions, the author attempts to formulate a mathematical proof to explain the correct answer to the contestant's question, "Should I switch?"
(See the simulation sketch following the last item below.)

Item
Space Weather Effects on Imaging Detectors in Low Earth Orbit (2010-10-12)
Johnson, Adam Alan
The objective of this research is the statistical study of space weather effects on image detectors in Low Earth Orbit. The Hubble Space Telescope is used as a resource for acquiring proton-affected images for statistical analysis. For the purpose of the present work, the space weather environment consists of cosmic as well as solar proton particles. The proton occurrences evident in images from the Hubble Charge Coupled Device (CCD) have been used to calculate the probability of proton events, which is related to the local space weather particle flux. The proton particles transfer energy to the CCD silicon, which ultimately results in measured signal that does not originate from photon illumination. The signal due to the proton interactions is first separated from the noise contribution and subsequently used in the determination of a pulse height probability distribution. Separation of the noise from the proton events also leads to the measurement of proton streak lengths and orientations, along with the associated probability distributions. The directionality of the space weather environment in Low Earth Orbit is examined using the distribution of proton streak angles. Statistics found from the Hubble are also used as a starting point for simulations that create synthetic proton signal images. The distributions resulting from the Hubble CCD analysis give the probability of: the number of proton events, which is related to the flux of the space weather protons; the energy of proton events, which allows estimates of damaging proton interactions; the length of proton streaks on the CCD, which shows the relative probability of a long traversing proton event; and the angle of proton events, which indicates the directionality of the space weather environment.
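[Editorial note: for readers of the "Should I switch?" item above, the following is a minimal simulation sketch of the game-show scenario under the standard assumption that the host always opens a non-chosen, non-winning door; it is not taken from the report itself. Under these assumptions switching wins about two-thirds of the time, while staying wins about one-third.]

# Minimal sketch: simulate the "Should I switch?" scenario, assuming the host
# always opens a door that hides no prize and was not the contestant's pick.
import random

def play(switch, trials=100_000):
    wins = 0
    for _ in range(trials):
        doors = [0, 1, 2]
        prize = random.choice(doors)
        pick = random.choice(doors)
        # Host opens a door that is neither the contestant's pick nor the prize.
        opened = random.choice([d for d in doors if d != pick and d != prize])
        if switch:
            pick = next(d for d in doors if d != pick and d != opened)
        wins += (pick == prize)
    return wins / trials

print("stay:  ", round(play(switch=False), 3))   # about 1/3
print("switch:", round(play(switch=True), 3))    # about 2/3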