Browsing by Subject "Measurement Invariance"
Item: Detecting the Violation of Factorial Invariance with an Unknown Reference Variable (2014-08-04), Jung, Eunju

A widely used tool for testing measurement invariance is multi-group confirmatory factor analysis (MCFA). Identification of MCFA models is usually done by imposing invariance constraints on the parameters of chosen reference variables (RVs). If the chosen RVs are not actually invariant, one could draw invalid conclusions about the source of noninvariance. How can an invariant RV be selected accurately? To our knowledge, no such method is yet available, although two approaches have been suggested to detect noninvariant (or invariant) items without choosing specific RVs. One is the factor-ratio test (FR-T), and the other is the use of the largest modification index (Max-Mod). These two approaches have yet to be directly compared under the same conditions. To address these unsolved problems in partial measurement invariance testing, two studies were conducted. The first aimed to identify a truly invariant RV using the smallest modification index (Min-Mod). The second aimed to directly compare the performance of the FR-T and the backward approach using the Max-Mod in correctly specifying the source of noninvariance. The second study also proposed a new method: the forward approach, facilitated by bias-corrected bootstrap confidence intervals. The performance of the three methods was compared in terms of perfect recovery rates, model-level Type I error rates, and model-level Type II error rates. The results of the first study indicated that the Min-Mod successfully identified a truly invariant RV across all conditions. In the second study, the backward approach showed the best overall performance at the 99% confidence level (α = 0.01) in both partial metric invariance (PMI) and partial scalar invariance (PSI) conditions. The performance of the forward approach was comparable to that of the backward approach only in PMI conditions. The factor-ratio test had the poorest performance.
Limitations and future directions are also discussed.

Item: Impact of Not Fully Addressing Cross-Classified Multilevel Structure in Testing Measurement Invariance and Conducting Multilevel Mixture Modeling within Structural Equation Modeling Framework (2014-07-25), Im, Myung

In educational settings, researchers are likely to encounter multilevel data whose structure is not strictly nested or hierarchical but cross-classified. However, due to unfamiliarity with cross-classified models and the limitations of statistical software, most substantive researchers adopt less optimal approaches to analyze cross-classified multilevel data. Two separate Monte Carlo studies were conducted to evaluate the impact of misspecifying cross-classified data as hierarchically structured data in two different analytical settings within the structural equation modeling (SEM) framework. Study 1 evaluated the performance of conventional multilevel confirmatory factor analysis (MCFA), which assumes hierarchical multilevel data, in testing measurement invariance, especially when the noninvariance exists at the between-level groups. We considered two design factors: intra-class correlation (ICC) and the magnitude of factor loading differences. This simulation study showed low empirical power in detecting noninvariance under low-ICC conditions. Furthermore, the low power was plausibly related to underestimated ICCs and underestimated factor loading differences, caused by the redistribution of the variance component from the crossed factor ignored in the analysis. Study 2 examined the performance of conventional multilevel mixture models (MMMs), which assume hierarchical multilevel data, with respect to the classification accuracy of class enumeration and individuals' class assignment when the latent class variable is at the between (cluster) level. We considered a set of study conditions, including cluster size, degree of partial cross-classification, and mixing proportion of subgroups.
The results showed that ignoring a crossed factor caused overestimation of the variance component of the remaining crossed factor at the between level, because variance from the ignored crossed factor was redistributed to it. Moreover, no SEM software program can simultaneously conduct MMM and account for the cross-classified data structure. Hence, researchers should acknowledge this limitation and exercise caution when conventional MMM is applied to cross-classified multilevel data, given the inflated variance component associated with the remaining crossed factor. Implications of the findings and limitations of each study are discussed.
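The forward approach in the first abstract relies on bias-corrected (BC) bootstrap confidence intervals. As a generic sketch of that interval type only (not the dissertation's implementation; the sample of "loading differences", the `np.mean` statistic, and all parameter values below are hypothetical), the BC method shifts the percentile endpoints of the bootstrap distribution by a bias-correction factor z0 before reading off the interval:

```python
import numpy as np
from statistics import NormalDist

def bc_bootstrap_ci(data, stat, alpha=0.01, n_boot=2000, seed=0):
    """Bias-corrected (BC) bootstrap confidence interval for stat(data)."""
    nd = NormalDist()
    rng = np.random.default_rng(seed)
    theta_hat = stat(data)
    # Resample with replacement and recompute the statistic n_boot times.
    boots = np.array([stat(rng.choice(data, size=len(data), replace=True))
                      for _ in range(n_boot)])
    # Bias-correction factor: how far the bootstrap distribution is shifted
    # relative to the original estimate.
    z0 = nd.inv_cdf((boots < theta_hat).mean())
    z_lo, z_hi = nd.inv_cdf(alpha / 2), nd.inv_cdf(1 - alpha / 2)
    # Adjusted percentile endpoints of the bootstrap distribution.
    p_lo, p_hi = nd.cdf(2 * z0 + z_lo), nd.cdf(2 * z0 + z_hi)
    return float(np.quantile(boots, p_lo)), float(np.quantile(boots, p_hi))

# Hypothetical sample of parameter (e.g., loading) differences: a parameter
# would be flagged as noninvariant when the 99% CI excludes zero.
rng = np.random.default_rng(1)
diffs = rng.normal(loc=0.3, scale=0.5, size=200)
lo, hi = bc_bootstrap_ci(diffs, np.mean)
print(f"99% BC CI: ({lo:.3f}, {hi:.3f}); excludes 0: {not lo <= 0 <= hi}")
```

With α = 0.01 this mirrors the 99% confidence level reported for the comparison above; the decision rule shown (flag when the interval excludes zero) is the standard use of such intervals, stated here as an illustration rather than as the study's exact procedure.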