Comparison of value-added models for school ranking and classification: a Monte Carlo study
Abstract
A "value-added" definition of school effectiveness calls for evaluating schools based on their unique contributions to individual students' academic growth. Value-added estimates of school effectiveness are typically used to rank and classify schools. The current Monte Carlo simulation study examined and compared the validity of school effectiveness estimates from four statistical models for school ranking and classification. The simulation was conducted under two sample-size conditions and under situations typical of school effectiveness research. The Conditional Cross-Classified Model (CCCM) was used to simulate the data. The findings indicated that the gain score model adjusting for students' test scores at the end of kindergarten (i.e., prior to entering elementary school) (Gain_kindergarten) could validly rank and classify schools. The other models, including the gain score model adjusting for students' test scores at the end of Grade 4 (i.e., one year before school effectiveness was estimated in Grade 5) (Gain_grade4), the Unconditional Cross-Classified Model (UCCM), and the Layered Mixed Effect Model (LMEM), could not validly rank or classify schools. The failure of the UCCM indicated that ignoring covariates distorts school rankings and classifications when no other analytical remedies are applied. The failure of the LMEM indicated that estimating correlations among repeated measures could not alleviate the damage caused by the omitted covariates. The failure of the Gain_grade4 model cautioned against adjusting for the previous year's test scores. The success of the Gain_kindergarten model indicated that, under some circumstances, valid school rankings and classifications can be achieved with only two time points of data.