Browsing by Subject "Statistical decision"
Now showing 1-8 of 8
Item: A Brunswik lens investigation of the profile interpretation, pure statistical, and clinical synthesis methods of predicting absenteeism
(Texas Tech University, 1985-08) Willoughby, Frederick William
The ability of humans to make judgments and predictions about their environment has been studied in the laboratory through a variety of tasks (e.g., making diagnoses, predicting improvement in therapy, job performance, dangerousness to self and others, and recidivism). The accuracy of human (clinical) prediction in comparison to actuarially derived (statistical) prediction has been heavily debated since Paul Meehl's controversial 1954 publication, Clinical versus Statistical Prediction. Support for the superiority of statistical prediction methods over clinical prediction methods has been overwhelming. Researchers have suggested that further research should focus on developing procedures that would enable the human judge to make more accurate predictions. The present study investigated human judgment in the task of managers predicting employee absenteeism. Three hypotheses were proposed. The first hypothesis predicted that strict application of Bayes' formula (the pure statistical method of prediction) would significantly exceed the predictive accuracy of managers who received training in Bayes' formula (the clinical synthesis method) as well as managers who did not receive this training (the profile interpretation method). The managers in the clinical synthesis condition, however, were expected to be significantly more accurate than managers in the profile interpretation condition. The second hypothesis predicted that managers in the clinical synthesis condition would have significantly higher achievement, consistency, and matching indices, as defined by the Brunswik lens model, than managers in the profile interpretation condition.
Finally, the third hypothesis predicted that managers in the clinical synthesis condition would be more appropriately confident in the accuracy of their predictions (i.e., better calibrated) than managers in the profile interpretation condition. The results showed no differences between the pure statistical and clinical synthesis methods of prediction; these two methods, however, were significantly more accurate than the profile interpretation method. Second, the clinical synthesis condition resulted in a significantly higher achievement index than the profile interpretation condition. Finally, managers in the clinical synthesis condition were better calibrated than managers in the profile interpretation condition. Discussion of these results included possible explanations of the findings and suggestions for further research.

Item: Adaptive hierarchical classification with limited training data
(2002) Morgan, Joseph Troy; Crawford, Melba M.
This research focused on the development of a hierarchical approach to classification that is robust with respect to training data that are limited both in quantity and in spatial extent. Many difficult classification problems involve a high-dimensional input and output space (candidate labels). Because of the "curse of dimensionality," it is necessary to reduce the size of the input space when only a limited quantity of training data is available. While a significant amount of research has focused on transforming the input space into a reduced feature space that accurately discriminates between the classes in a fixed output space, traditional approaches fail to capitalize on the domain knowledge and flexibility gained by transforming the feature space and the output space simultaneously. A new approach is proposed that uses domain knowledge, automatically discovered from the data, to combat the "small sample size" problem.
Spatially limited training data can result in poor inference concerning the true populations; the detrimental impact of ignoring this issue is explored and demonstrated. If the hypothesis that new clusters are simply deformed versions of classes already in the spectral library is accepted, previously acquired information is transferred to update the class signatures with the new clusters. Independent of limited training data, in terms of both spatial extent and quantity, different sampling subsets of the same ground truth may result in slightly different classifiers; this issue has not previously been addressed rigorously. The advantages of using an ensemble of classifiers built from subsamples of the training data are widely acknowledged, but ensembles have not previously been used in the context of a hierarchical classifier for remote sensing data, or for hyperspectral data in general. The ensemble of classifiers is used to identify a suitable level of the tree for situations where the resolution of the output space cannot be supported. Further decisions about how, and at what level, the classification structure should be adapted are explored. Furthermore, pseudolabeled data are used to improve classification results at that level of resolution.

Item: Application of empirical Bayes decision procedures to discrete time linear filtering
(Texas Tech University, 1970-12) Kamat, Satish Janardan
Not available

Item: Comparison of location estimators using Banks' criterion
(Texas Tech University, 2004-08) Karunaratne, H. Susitha I
Not available

Item: Inevitable disappointment and decision making based on forecasts
(2006) Chen, Min; Dyer, James

Item: Optimum stratified sampling using prior information
(Texas Tech University, 1988-08) Koti, Kallappa M
The stratified sample allocation problem using prior information concerning strata variances is considered.
Given k random variables X_1, X_2, ..., X_k on a probability space, a Borel measurable function X of X_1, X_2, ..., X_k, called a maximal utility function, is defined. A rigorous derivation of its expected value is presented. The definition and expected value of X are used repeatedly to formulate the objective functions that solve the stratified sample allocation problem; the resulting allocations are called minimax allocations. Assuming prior information in the form of a distribution function on the strata variances, a noninformative design is proposed as an alternative to Aggarwal's (1958) allocation. If prior information concerning the strata coefficients of variation is available, a minimax sampling strategy based on Searls' (1964) work is presented. Under a normal superpopulation model, assuming locally uniform prior distributions on the strata means and variances, two-phase minimax allocations comparable with those of Draper et al. (1968) are developed. Several numerical examples are given to illustrate the minimax allocation procedure and compare it with other existing procedures.

Item: Prioritization and optimization in stochastic network interdiction problems
(2008-12) Michalopoulos, Dennis Paul, 1979-; Barnes, J. Wesley; Morton, David P.
The goal of a network interdiction problem is to model competitive decision-making between two parties with opposing goals. The simplest interdiction problem is a bilevel model consisting of an "adversary" and an interdictor. In this setting, the interdictor first expends resources to optimally disrupt the network operations of the adversary; the adversary subsequently optimizes in the residual interdicted network. In particular, this dissertation considers an interdiction problem in which the interdictor places radiation detectors on a transportation network in order to minimize the probability that a smuggler of nuclear material can avoid detection.
A particular area of interest in stochastic network interdiction problems (SNIPs) is so-called prioritized decision-making. The motivation for this framework is that, in many real-world settings, decisions must be made now under uncertain resource levels, e.g., interdiction budgets, available man-hours, or any other resource, depending on the problem setting. Applied to the stochastic network interdiction setting, the solution to the prioritized SNIP (PrSNIP) is a rank-ordered list of locations to interdict, ranked from highest to lowest importance. It is well known in the operations research literature that stochastic integer programs are among the most difficult optimization problems to solve. Even for modest levels of uncertainty, commercial integer programming solvers can have difficulty solving models such as PrSNIP. However, metaheuristic and large-scale mathematical programming algorithms are often effective in solving instances from this class of difficult optimization problems. The goal of this doctoral research is to investigate different methods for modeling and solving SNIPs (optimization) and PrSNIPs (prioritization via optimization). We develop a number of different prioritized and unprioritized models, as well as exact and heuristic algorithms for solving each problem type. The mathematical programming algorithms that we consider are based on row and column generation techniques, and our heuristic approach uses adaptive tabu search to quickly find near-optimal solutions. Finally, we develop a group of hybrid algorithms that combine elements of both classes of algorithms.

Item: Smooth empirical Bayes estimation with application to the Weibull distribution
(Texas Tech University, 1970-05) Bennett, G. Kemble
The type of decision problem considered in this dissertation can best be illustrated by an example.
Consider the development program for a particular solid-propellant rocket engine that must "burn" for a specified time. In this program, certain points exist at which progress is monitored. For instance, the Pre-Flight Rating Test program would be one such point, at the culmination of the initial R&D program, demonstrating the ability of a sample of engines to perform for a specified length of time. After this phase, a new phase is entered in which flight and static tests are performed and, if needed, a more refined system configuration is developed. Finally, the design is frozen, and a Qualification Test program is undertaken to demonstrate the suitability of the engine system. During this period in the program, several groups of engines are test-fired, and because of stringent reliability requirements, a large sample of engines is required.
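The large-sample pressure described in this last abstract can be made concrete with a standard reliability-demonstration calculation. This sketch is not taken from the dissertation itself; the function name is illustrative, and the formula is the classic zero-failure "success-run" bound: to demonstrate reliability R at confidence C with no allowed failures, one needs the smallest n with 1 - R^n >= C.

```python
import math

def success_run_sample_size(reliability: float, confidence: float) -> int:
    """Smallest zero-failure sample size n such that n consecutive
    successes demonstrate the given reliability at the given confidence,
    i.e. the smallest integer n with 1 - reliability**n >= confidence."""
    return math.ceil(math.log(1.0 - confidence) / math.log(reliability))

# Even moderate reliability targets force dozens or hundreds of test
# firings, which is why qualification programs for high-reliability
# engines require large samples.
print(success_run_sample_size(0.95, 0.90))  # -> 45
print(success_run_sample_size(0.99, 0.90))  # -> 230
```

The count grows roughly as 1/(1 - R) for fixed confidence, so each extra "nine" of demonstrated reliability multiplies the required number of test articles by about ten.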