The performance of cross-validation indices used to select among competing covariance structure models

dc.contributor.advisor: Stapleton, Laura M. (en)
dc.creator: Whittaker, Tiffany Ann (en)
dc.date.accessioned: 2008-08-28T21:44:35Z (en)
dc.date.available: 2008-08-28T21:44:35Z (en)
dc.date.issued: 2003 (en)
dc.description: text (en)
dc.description.abstract: When testing structural equation models, researchers attempt to establish a model that will generalize to other samples from the same population. Unfortunately, researchers tend to test and respecify models during this attempt, capitalizing on idiosyncratic characteristics of the sample data in which the model is being developed. Several measures of model fit exist to aid researchers in selecting a model that fits the sample data well. However, these measures fail to consider the predictive validity of a model, or how well it will generalize to other samples from the same population. In 1983, Cudeck and Browne proposed using cross-validation as a model selection technique. They recommended cross-validating several plausible models and selecting the model with the most predictive validity. Several cross-validation indices have been proposed in the past twenty years, including the single-sample AIC, CAIC, and BCI; the multiple-sample C; the two-sample CVI; and the "pseudo" single-sample C* and C *. Previous studies have investigated the performance of these various indices, but have been limited with respect to the study design characteristics examined. The purpose of this study is to extend the literature in this area by examining the performance of the previously mentioned cross-validation indices under additional study design characteristics, such as nonnormality and cross-validation design. Factor loading, sample size, and model misspecification conditions were also manipulated. The performance of each cross-validation index was measured in terms of how many times out of 1,000 replications it selected the correct confirmatory factor model. The results indicated that the performance of the cross-validation indices tended to improve as factor loading and sample size increased. The double cross-validated indices outperformed their simple cross-validated counterparts in certain conditions. The performance of the cross-validation indices tended to decrease as nonnormality increased. Recommendations are provided as to which cross-validation methods would optimally perform in a given condition. It is hoped that this study provides researchers with useful information concerning the use of cross-validation as a model selection technique and that researchers will begin to focus on the predictive validity of a structural equation model in addition to overall model fit. (en)
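The two-sample cross-validation idea the abstract describes can be illustrated with Cudeck and Browne's approach: fit each competing model on a calibration sample, then score it by the maximum-likelihood discrepancy between the validation sample's covariance matrix and the model-implied covariance matrix; the model with the smallest cross-validated discrepancy is preferred. The following NumPy sketch is purely illustrative and is not the dissertation's procedure: in place of competing confirmatory factor models it compares a saturated model against an independence model, both "estimated" on the calibration half.

```python
import numpy as np

def ml_discrepancy(S, Sigma):
    """ML discrepancy F(S, Sigma) = ln|Sigma| + tr(S Sigma^-1) - ln|S| - p.

    Nonnegative, and zero only when the model-implied matrix Sigma
    exactly reproduces the sample covariance matrix S.
    """
    p = S.shape[0]
    _, logdet_S = np.linalg.slogdet(S)
    _, logdet_Sigma = np.linalg.slogdet(Sigma)
    return logdet_Sigma + np.trace(S @ np.linalg.inv(Sigma)) - logdet_S - p

rng = np.random.default_rng(0)

# Population: three variables with pairwise correlation .5
Sigma_pop = np.array([[1.0, 0.5, 0.5],
                      [0.5, 1.0, 0.5],
                      [0.5, 0.5, 1.0]])
X = rng.multivariate_normal(np.zeros(3), Sigma_pop, size=400)
calib, valid = X[:200], X[200:]          # two-sample (calibration/validation) split
S_c = np.cov(calib, rowvar=False)
S_v = np.cov(valid, rowvar=False)

# Two competing "models" fitted to the calibration sample only
# (stand-ins for the competing factor models in the study):
Sigma_saturated = S_c                     # saturated model reproduces S_c exactly
Sigma_indep = np.diag(np.diag(S_c))       # independence model ignores covariances

# Cross-validation index: discrepancy of the *validation* covariance
# matrix from each calibration-based model-implied matrix
cvi_saturated = ml_discrepancy(S_v, Sigma_saturated)
cvi_indep = ml_discrepancy(S_v, Sigma_indep)

# Lower index -> better expected generalization; here the independence
# model is misspecified, so its cross-validated discrepancy is larger.
```

In a study design like the one described, this comparison would be repeated over many replications, counting how often each index picks the correct model.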
dc.description.department: Educational Psychology (en)
dc.format.medium: electronic (en)
dc.identifier: b57346033 (en)
dc.identifier.oclc: 57141268 (en)
dc.identifier.proqst: 3116229 (en)
dc.identifier.uri: http://hdl.handle.net/2152/1058 (en)
dc.language.iso: eng (en)
dc.rights: Copyright is held by the author. Presentation of this material on the Libraries' web site by University Libraries, The University of Texas at Austin was made possible under a limited license grant from the author who has retained all copyrights in the works. (en)
dc.subject.lcsh: Social sciences--Statistical methods--Mathematical models (en)
dc.subject.lcsh: Analysis of variance--Mathematical models (en)
dc.title: The performance of cross-validation indices used to select among competing covariance structure models (en)
dc.type.genre: Thesis (en)