Browsing by Subject "sample size"
Item: An Investigation of the Optimal Sample Size, Relationship between Existing Tests and Performance, and New Recommended Specifications for Flexible Base Courses in Texas (2013-04-22) Hewes, Bailey

The purpose of this study was to improve flexible base course performance within the state of Texas while reducing TxDOT's testing burden. The focus of this study was to revise the current specification with the intent of providing a "performance related" specification while optimizing sample sizes and testing frequencies based on material variability. A literature review yielded information on base course variability within and outside the state of Texas, and on the tests that other states, and Canada, currently use to characterize flexible base performance. A sampling and testing program was conducted at Texas A&M University to obtain current variability information and to conduct performance-related tests, including resilient modulus and permanent deformation. In addition to being more current, these data are more representative of short-term variability than data obtained from the literature. This "short-term" variability is considered more realistic for what typically occurs during construction operations. A statistical sensitivity analysis (based on the 80th-percentile standard deviation) of these data was conducted to determine minimum sample sizes for contractors to qualify for the proposed quality monitoring program (QMP). The required sample sizes for contractors to qualify for the QMP are 20 for gradation, compressive strength, and moisture-density tests; 15 for Atterberg limits; and 10 for Wet Ball Mill. These sample sizes are based on a minimum 25,000-ton stockpile, or "lot". After qualifying for the program, contractors who can show that their variability is better than the 80th percentile may reduce their testing frequencies.
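The minimum-sample-size logic described above resembles a standard confidence-interval calculation: given a material property's standard deviation and a tolerable estimation error, the required n follows from n = (zσ/E)². A minimal sketch of that calculation; the standard deviation, tolerance, and confidence level below are hypothetical illustrations, not the study's actual 80th-percentile values:

```python
import math

def min_sample_size(sigma: float, tolerance: float, z: float = 1.645) -> int:
    """Smallest n such that the sample mean falls within `tolerance`
    of the true value at the confidence implied by z (1.645 ~ 90%)."""
    return math.ceil((z * sigma / tolerance) ** 2)

# Hypothetical: a test with standard deviation 4.0 units and a
# tolerable error of 1.5 units needs 20 samples; halving the
# variability sharply cuts the requirement.
print(min_sample_size(sigma=4.0, tolerance=1.5))  # 20
print(min_sample_size(sigma=2.0, tolerance=1.5))  # 5
```

This mirrors the abstract's point that contractors who demonstrate lower variability can justify reduced testing frequencies.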
The sample size for TxDOT's verification testing is 5 samples per lot and will remain at that number regardless of reduced variability. Once qualified for the QMP, a contractor may continue to send material to TxDOT projects until a failing sample disqualifies the contractor from the program. TxDOT does not currently require washed gradations for flexible base. Dry and washed sieve analyses were performed during this study to investigate the need for washed gradations. Statistical comparisons of these data yielded strong evidence that TxDOT should always use a washed method. Significant differences between the washed and dry methods were found in the percentage of material passing the No. 40 and No. 200 sieves. Since TxDOT already specifies limits on the fraction of material passing the No. 40 sieve, and since this study yielded evidence that this size fraction is related to resilient modulus (performance), it would be beneficial to use a washed sieve analysis and thereby obtain a more accurate reading for that specification. Furthermore, it is suggested that TxDOT require contractors to establish "target" test values and to place 90 percent within limits (90PWL) bands around those targets to control material variability.

Item: Investigating the Effects of Sample Size, Model Misspecification, and Underreporting in Crash Data on Three Commonly Used Traffic Crash Severity Models (2011-08-08) Ye, Fan

Numerous studies have documented the application of crash severity models to explore the relationship between crash severity and its contributing factors. These studies show that a large amount of work has been conducted on this topic, usually focused on different types of models. However, only a limited amount of research has compared the performance of different crash severity models.
Additionally, three major issues related to the modeling process for crash severity analysis have not been sufficiently explored: sample size, model misspecification, and underreporting in crash data. Therefore, in this research, three commonly used traffic crash severity models, the multinomial logit (MNL), ordered probit (OP), and mixed logit (ML) models, were studied in terms of the effects of sample size, model misspecification, and underreporting in crash data, via a Monte Carlo approach using simulated and observed crash data. The results on sample size effects are consistent with prior expectations in that small sample sizes significantly affect the development of crash severity models, no matter which model type is used. Furthermore, among the three models, the ML model was found to require the largest sample size, while the OP model required the smallest; the sample size requirement for the MNL model is intermediate between the other two. In addition, when the sample size is sufficient, the model misspecification analysis leads to the following suggestions: to decrease the bias and variability of estimated parameters, logit models should be selected over probit models, and a more general and flexible model, such as one allowing randomness in the parameters (i.e., the ML model), is preferable. Another important finding was that none of the three models was immune to underreporting in the data. To minimize the bias and reduce the variability of the model, fatal crashes should be set as the baseline severity for the MNL and ML models, while for the OP model the crash severity should be ranked from fatal to property-damage-only (PDO) in descending order.
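The Monte Carlo finding that small samples inflate estimator variability can be illustrated with a toy binary logit, a simplified stand-in for the MNL/OP/ML models in the study (which require more machinery). All coefficients, sample sizes, and replication counts here are hypothetical, not taken from the thesis:

```python
import math
import random
import statistics

def simulate_logit(n, beta=(0.5, -1.0), rng=random):
    """Draw n (x, y) observations from a binary logit with true coefficients beta."""
    data = []
    for _ in range(n):
        x = rng.gauss(0, 1)
        p = 1 / (1 + math.exp(-(beta[0] + beta[1] * x)))
        data.append((x, 1 if rng.random() < p else 0))
    return data

def fit_logit(data, steps=150, lr=1.0):
    """Crude gradient-ascent MLE for the intercept and slope."""
    b0 = b1 = 0.0
    for _ in range(steps):
        g0 = g1 = 0.0
        for x, y in data:
            p = 1 / (1 + math.exp(-(b0 + b1 * x)))
            g0 += y - p
            g1 += (y - p) * x
        b0 += lr * g0 / len(data)
        b1 += lr * g1 / len(data)
    return b0, b1

# Monte Carlo: refit the model on fresh samples and compare the spread
# of the slope estimate at a small vs. a large sample size.
rng = random.Random(42)
spread = {}
for n in (50, 500):
    slopes = [fit_logit(simulate_logit(n, rng=rng))[1] for _ in range(20)]
    spread[n] = statistics.stdev(slopes)
print(spread[50] > spread[500])  # small-sample estimates vary more
```

The same repeated-sampling logic, applied to full severity models, underlies the study's sample size comparisons.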
Furthermore, when full or partial information about the unreported rates for each severity level is known, treating crash data as outcome-based samples in model estimation, via the Weighted Exogenous Sample Maximum Likelihood Estimator (WESMLE), dramatically improves estimation for all three models compared with the results produced by the standard maximum likelihood estimator (MLE).
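The WESMLE correction can be sketched as reweighting each observation by the ratio of its severity class's population share to its sample share, so underreported classes count for more in the likelihood. The counts and shares below are hypothetical illustrations, not the thesis's data:

```python
def wesmle_weights(sample_counts, population_shares):
    """Per-class WESMLE weights: population share / sample share.
    Each observation's log-likelihood term is multiplied by the
    weight of its observed severity class."""
    n = sum(sample_counts.values())
    return {k: population_shares[k] / (sample_counts[k] / n)
            for k in sample_counts}

# Hypothetical data: PDO crashes are heavily underreported in the
# sample relative to their true (population) share.
counts = {"PDO": 300, "injury": 500, "fatal": 200}
shares = {"PDO": 0.70, "injury": 0.25, "fatal": 0.05}
weights = wesmle_weights(counts, shares)
print(weights)  # PDO weighted above 1, fatal below 1
```

Upweighting the underreported PDO class and downweighting the overrepresented fatal class is what lets the weighted estimator recover less biased parameters than plain MLE when reporting rates are known.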