Browsing by Subject "Sampling (Statistics)"
Now showing 1 - 13 of 13
Item: An empirical study of audit sampling problems (Texas Tech University, 1986-08). Tatum, Kay Ward.
Statement on Auditing Standards (SAS) No. 39, "Audit Sampling," was issued in June 1981. Several events associated with its issuance suggested that auditors were possibly experiencing various problems implementing its requirements. The specific objectives of this study were to:
1. Determine the major audit sampling problems in current audits, based on frequency of occurrence.
2. Determine whether the frequency of audit sampling problems was related to a statistical versus a nonstatistical approach, or to a high versus a low level of continuing professional education (CPE).
3. Determine whether the frequency of audit sampling problems differed between current and past audits.
4. Determine the nature and extent of audit sampling methods in current audits.
5. Compare the frequency of audit sampling problems and methods in compliance and substantive tests.
6. Determine the effect of the SAS No. 39 requirements on the audit process.
A total of 1,988 public accounting firms were surveyed. This population was divided into four strata: largest firms, other large firms, Division firms, and other small firms. Data analysis included descriptive statistics and t-tests. Survey results relative to each objective were:
1. Eight considerations and procedures performed in compliance tests, and fourteen in substantive tests, were determined to be major problems.
2. Frequencies of problems were significantly greater for the largest firms using a nonstatistical approach and for the other small firms providing a high level of CPE to their audit staffs.
3. The frequency of problems in current audits decreased significantly for the largest and the other small firms.
4. Within a stratum, the firms' approaches to testing were fairly consistent across the various categories of compliance and substantive tests. The least amount of audit sampling was reported by the other large firms; the amount of statistical sampling was about the same for all strata.
5. The frequencies of audit sampling in compliance and substantive tests were not significantly different, but the frequency of statistical methods was significantly greater in compliance tests than in substantive tests.
6. The firms changed their audit processes to incorporate the SAS No. 39 requirements by modifying their audit sampling definitions and approaches, increasing audit sampling, and increasing audit sampling documentation.

Item: An initial sample size selection procedure (Texas Tech University, 1969-08). Smith, Harris William.
Not available.

Item: Approximations to the exact distribution of the Kruskal-Wallis test statistic for unequal sample sizes (Texas Tech University, 1976-12). Wynn, Terry Duane.
Not available.

Item: Bayesian and pseudo-likelihood interval estimation for comparing two Poisson rate parameters using under-reported data (2009-04-01). Greer, Brandi A.; Young, Dean M.; Baylor University, Dept. of Statistical Sciences.
We present interval estimation methods for comparing Poisson rate parameters from two independent populations with under-reported data, for both the rate difference and the rate ratio. We apply the Bayesian paradigm to derive credible intervals for the ratio and the difference of the Poisson rates, and we construct pseudo-likelihood-based confidence intervals for the ratio of the rates. We begin by considering two cases for analyzing under-reported Poisson counts: inference when training data are available and inference when they are not. From these cases we derive two marginal posterior densities for the difference in Poisson rates and corresponding credible sets. First, we perform Monte Carlo simulation analyses to examine the effects of differing model parameters on the posterior density; we then perform additional simulations to study the robustness of the posterior density to misspecified priors. We apply the new Bayesian credible intervals for the difference of Poisson rates to an example concerning mortality rates due to acute lower respiratory infection in two age groups of children in the Upper River Division in Gambia, and to an example comparing automobile accident injury rates for male and female drivers. We also use the Bayesian paradigm to derive two closed-form posterior densities and credible intervals for the Poisson rate ratio, again with and without training data, examine their properties in a series of Monte Carlo simulation studies, and apply the credible intervals for the rate ratio to the same two examples. Lastly, we derive three new pseudo-likelihood-based confidence intervals for the ratio of two Poisson rates using the double-sampling paradigm for under-reported data: profile likelihood-, integrated likelihood-, and approximate integrated likelihood-based intervals. We compare the coverage properties and interval widths of the newly derived confidence intervals via Monte Carlo simulation and apply them to an example comparing cervical cancer rates.
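
As a concrete point of reference for the Greer abstract above, the sketch below computes Monte Carlo credible intervals for the difference and the ratio of two Poisson rates under independent conjugate Gamma priors. It covers only the fully-reported baseline case, not the dissertation's double-sampling adjustment for under-reporting; the counts, exposures, and prior parameters are illustrative assumptions.

```python
# A minimal sketch, assuming fully-reported counts: Monte Carlo credible
# intervals for the difference and ratio of two independent Poisson rates.
# The dissertation's contribution (adjusting for under-reported counts via
# double sampling) is not reproduced here; all numbers are hypothetical.
import numpy as np

rng = np.random.default_rng(seed=1)

y1, t1 = 30, 1000.0   # observed count and exposure, population 1 (hypothetical)
y2, t2 = 45, 1000.0   # observed count and exposure, population 2 (hypothetical)
a, b = 0.5, 0.0001    # diffuse Gamma(shape, rate) prior on each rate

# The Gamma prior is conjugate for a Poisson rate: the posterior is
# Gamma(a + y, b + t). numpy parameterizes gamma draws by shape and scale.
lam1 = rng.gamma(a + y1, 1.0 / (b + t1), size=100_000)
lam2 = rng.gamma(a + y2, 1.0 / (b + t2), size=100_000)

diff_ci = np.percentile(lam1 - lam2, [2.5, 97.5])   # 95% credible interval
ratio_ci = np.percentile(lam1 / lam2, [2.5, 97.5])

print(f"95% credible interval for lambda1 - lambda2: {diff_ci}")
print(f"95% credible interval for lambda1 / lambda2: {ratio_ci}")
```

With conjugate Gamma priors the posterior of each rate is available in closed form, so the Monte Carlo step is needed only because the difference and ratio of two independent Gamma variables have no convenient closed-form quantiles.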
Item: The impact of the inappropriate modeling of cross-classified data structures (2004). Meyers, Jason Leon; Beretvas, Susan Natasha.

Item: Improving sampled microprocessor simulation (2005). Luo, Yue; John, Lizy Kurian.
Microprocessor evaluation using detailed cycle-accurate simulation is prohibitively time-consuming, and sampling is the most widely used technique for reducing simulation time. This dissertation proposes new sampling designs that exploit the characteristics of the workload, the microarchitecture being simulated, and the user's specific objective, improving accuracy while reducing simulation time and storage cost. Statistical sampling theory is employed to study the choice of sampling unit size for simple random sampling with perfect warm-up; more importantly, the inherent characteristic of the benchmarks that affects this choice is discerned. Previous research has focused on the accuracy of Cycles Per Instruction (CPI), but most simulations are used to measure the speedup due to some microarchitectural enhancement. A new sampling scheme that employs the ratio estimator from statistical theory is proposed to measure speedup and to quantify its error; in the experiment, 9X fewer instructions are simulated compared to estimating CPI for the same relative error limit. The dissertation also extends sampling techniques to the simulation of commercial workloads such as On-Line Transaction Processing (OLTP) used by banks, airlines, etc. The applicability of simple random sampling and representative sampling for OLTP workloads is investigated, and a dynamic stopping rule is proposed that requires only one simulation, eliminating the second simulation required by previous random sampling methods. To achieve accurate sampling results, microarchitectural structures must be adequately warmed up before each measurement. Previous warm-up techniques have not considered the cache configuration being simulated, an important factor in warm-up length; this dissertation presents a new cache warm-up technique that adapts the warm-up length to the cache configuration and the benchmark's variability characteristics, greatly reducing warm-up length, especially for small caches, without losing accuracy. For trace-driven simulation, the sampled traces must be stored. Another contribution of the dissertation is the Locality-Based Trace Compression (LBTC) technique, which exploits both the spatial and the temporal locality of program memory references to efficiently compress not only the address but also the other attributes associated with each memory reference.
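
To make the ratio-estimator idea in the Luo and John abstract concrete, the sketch below estimates speedup from paired per-unit cycle counts and attaches a Taylor-linearization standard error, as in standard survey-sampling theory. The per-unit cycle counts are synthetic stand-ins for real simulator output, and the noise model is an assumption, not the dissertation's experimental setup.

```python
# A minimal sketch of a ratio estimator for speedup: the same sampling units
# are "simulated" on a baseline and an enhanced configuration, and speedup is
# estimated as a ratio of paired means. All numbers below are synthetic.
import numpy as np

rng = np.random.default_rng(seed=2)
n = 50                                     # number of sampled execution units

base_cycles = rng.normal(1.0e6, 1.5e5, size=n)                 # baseline config
enh_cycles = base_cycles / 1.25 + rng.normal(0, 2e4, size=n)   # enhanced config

# Ratio estimator of speedup: paired ratios share per-unit workload noise,
# so the ratio of means is far more stable than estimating each CPI alone.
r_hat = base_cycles.mean() / enh_cycles.mean()

# Taylor-linearized variance of a ratio estimator under simple random
# sampling (finite-population correction ignored for brevity).
resid = base_cycles - r_hat * enh_cycles
se = np.sqrt(resid.var(ddof=1) / n) / enh_cycles.mean()

print(f"estimated speedup: {r_hat:.3f} +/- {1.96 * se:.3f} (95% CI)")
```

Because both configurations are measured on the same sampling units, workload variability largely cancels in the ratio, which is why estimating speedup directly can reach a given relative error with far fewer simulated instructions than estimating each CPI separately.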
Item: Investigation of random sampling in flowshop sequencing (Texas Tech University, 1978-08). Charles, Oliver Ekepre.
Not available.

Item: Optimum stratified sampling using prior information (Texas Tech University, 1988-08). Koti, Kallappa M.
The stratified sample allocation problem using prior information concerning strata variances is considered. Given k random variables X1, X2, ..., Xk on a probability space, a Borel measurable function X of X1, X2, ..., Xk, called a maximal utility function, is defined, and a rigorous derivation of its expected value is presented. The definition and expected value of X are used repeatedly to formulate the objective functions used to solve the stratified sample allocation problem; the resulting allocations are called minimax allocations. Assuming prior information in the form of a distribution function on the strata variances, a noninformative design is proposed as an alternative to Aggarwal's (1958) allocation. If prior information concerning the strata coefficients of variation is available, a minimax sampling strategy based on Searls' (1964) work is presented. Under a normal superpopulation model, assuming locally uniform prior distributions on the strata means and variances, two-phase minimax allocations comparable with those of Draper et al. (1968) are developed. Several numerical examples illustrate the minimax allocation procedure and compare it with other existing procedures.

Item: Qualitative and quantitative sequential sampling (2006). Rai, Rahul; Campbell, Matthew I.
Sequential sampling refers to a family of design of experiments (DOE) methods in which the next sample point is determined by information from previous experiments. This dissertation introduces the qualitative and quantitative sequential sampling (Q2S2) technique, in which optimization and user knowledge are used to guide the efficient choice of sample points. The method combines information from multiple fidelity sources, including computer simulation models of the product, first principles involved in the design, and the designer's qualitative intuitions about the design. Quantitative and qualitative information from these varying fidelity sources are merged to arrive at a new sampling strategy. This is accomplished by introducing the concept of a confidence function, C, represented as a field that is a function of the decision variables, x, and the performance parameter, f. The advantages of the approach are demonstrated on a variety of example functions and on the design of a bi-stable Micro Electro Mechanical System (MEMS) relay, a complex and relevant mechanical system. In each case, the performance of Q2S2 is highly encouraging.
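
The Rai and Campbell abstract describes choosing each new sample point by scoring candidates with a confidence function that fuses quantitative predictions with the designer's qualitative intuitions. The sketch below shows only the shape of such a sequential loop, with a deliberately simple stand-in score (distance to the nearest existing sample, weighted by a hypothetical designer preference); the dissertation's actual confidence function C(x, f) is not reproduced here.

```python
# A minimal sketch of a sequential-sampling loop: each new sample point is
# chosen where a confidence-like score favors exploration. The score used
# here is a stand-in, not the Q2S2 confidence function from the dissertation.
import numpy as np

def expensive_experiment(x):
    """Stand-in for a simulation or physical experiment."""
    return np.sin(3 * x) + 0.5 * x

def designer_preference(x):
    """Hypothetical qualitative intuition: favor the middle of the range."""
    return 1.0 - 0.5 * np.abs(x - 0.5)

rng = np.random.default_rng(seed=3)
samples = list(rng.uniform(0, 1, size=3))        # small initial design
observations = [expensive_experiment(x) for x in samples]

for _ in range(7):                               # sequential sampling budget
    candidates = np.linspace(0, 1, 201)
    # Regions far from existing samples are poorly known; the qualitative
    # preference discounts regions the designer considers unpromising.
    nearest = np.min(np.abs(candidates[:, None] - np.array(samples)), axis=1)
    score = nearest * designer_preference(candidates)
    x_next = candidates[np.argmax(score)]        # most informative candidate
    samples.append(x_next)
    observations.append(expensive_experiment(x_next))

print(np.round(sorted(samples), 3))
```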
Item: Sample size determination for Emax model, equivalence/non-inferiority test and drug combination in fixed dose trials (2008-06-11). Wang, Jie, 1977-; Stamey, James D.; Baylor University, Dept. of Statistical Sciences.
Sample size determination is one of the most important aspects of clinical trial design: careful selection of an appropriate sample size not only saves economic and human resources but also improves model performance and efficiency. We first explore sample sizes for the Emax model in a simple one-group crossover design. The Emax model is one of the most frequently used models for the relationship between drug efficacy and dose level in pharmacokinetic/pharmacodynamic studies. In the frequentist approach, sample sizes are determined by the desired accuracy for the parameter of interest, ED₅₀. A non-linear mixed-effects model is applied to account for within-subject correlation, and to allow for different magnitudes of variability in the population parameters of the Emax model we propose three model structures for the random effects. In the Bayesian approach, sample sizes are determined by the desired coverage and by the average posterior variances and interval lengths for ED₅₀; in our simulation studies, sampling priors are used to generate the data, and non-informative priors represent ignorance about the key model parameters. (A sketch of this simulation-based criterion follows at the end of this list.) Sample sizes for comparative studies are then discussed in the Bayesian approach. In the absence of a gold standard, sample sizes are determined by the average posterior variances and lengths for the ratio of the marginal probabilities of two screening tests, whereas in the presence of a gold standard, sample sizes are evaluated under the same criterion applied to sensitivity and specificity; non-informative priors are utilized in this study as well. We also consider the problem of drug combination in fixed dose trials, testing whether a drug mixture, which may combine two or more agents, is more ‘effective’ than each of its components. Informative priors are derived for the component drugs and a non-informative prior is assumed for the drug mixture; sample sizes are evaluated by posterior standard errors, the average probability of greater effectiveness, and Bayesian power.

Item: Statistics and sampling in the accounting profession (Texas Tech University, 1962-05). Harrell, Frederick Norman.
Not available.

Item: The application of statistical sampling techniques to the field of auditing (Texas Tech University, 1960-08). Stevens, Elmer Glenn.
Not available.
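
Finally, the Bayesian sample-size criterion named in the Wang abstract above (choose n so that the average posterior interval for ED₅₀ is acceptably short) can be sketched as below. To stay brief, E0, Emax, and the noise standard deviation are treated as known and the ED₅₀ posterior is computed on a grid under a flat prior; the dissertation instead fits full non-linear mixed-effects models, so every constant here is an illustrative assumption.

```python
# A minimal sketch of simulation-based Bayesian sample-size determination:
# for each candidate n, repeatedly draw a "true" ED50 from a sampling prior,
# simulate Emax-model responses, and record the width of the 95% credible
# interval for ED50. E0, EMAX, and SIGMA are assumed known for brevity.
import numpy as np

rng = np.random.default_rng(seed=4)
E0, EMAX, SIGMA = 2.0, 10.0, 1.0           # assumed-known model constants
doses = np.array([0.0, 5.0, 25.0, 100.0])  # hypothetical dose levels
grid = np.linspace(1.0, 100.0, 400)        # flat-prior grid for ED50

def avg_interval_width(n_per_dose, n_sims=200):
    widths = []
    for _ in range(n_sims):
        ed50 = rng.uniform(10.0, 40.0)     # sampling prior generates the truth
        mean = E0 + EMAX * doses / (ed50 + doses)
        y = rng.normal(np.repeat(mean, n_per_dose), SIGMA)
        d = np.repeat(doses, n_per_dose)
        # Log-likelihood of each grid value of ED50 given the simulated data.
        mu = E0 + EMAX * d[None, :] / (grid[:, None] + d[None, :])
        loglik = -0.5 * np.sum((y[None, :] - mu) ** 2, axis=1) / SIGMA**2
        post = np.exp(loglik - loglik.max())
        post /= post.sum()
        cdf = np.cumsum(post)
        lo, hi = grid[np.searchsorted(cdf, [0.025, 0.975])]
        widths.append(hi - lo)
    return np.mean(widths)

for n in (5, 10, 20):                      # candidate subjects per dose arm
    print(f"n = {n:2d} per dose: mean 95% interval width = "
          f"{avg_interval_width(n):.2f}")
```

Increasing n per dose shrinks the average interval width; the smallest candidate n meeting a pre-specified width would be the chosen sample size.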