Utilizing implementation data to explain outcomes within a theory-driven evaluation model

Date

2007-12

Abstract

This study examined the moderating effects of teachers' implementation of a research-based comprehension intervention on a related student outcome. In addition to testing the utility of including implementation data in a model of student outcomes, the study examined the stability of implementation ratings across occasions and the relationship between two implementation data sources (teacher logs and researcher ratings). The program featured in the study consisted of research-based comprehension strategy instruction implemented in 4th grade classrooms during social studies. Neither measure of implementation (fidelity and overall instructional quality) predicted student outcomes. In the tested model, a student's comprehension skills upon entering 4th grade predicted post-intervention comprehension achievement more strongly than did the teacher's instructional practices. Secondary analyses indicated that an overall measure of teacher quality can be rated with reasonable reliability from only a few measurement occasions, whereas fidelity scores were less stable across occasions. The alternative method of collecting implementation data used in this study (audio recordings) appears to offer a viable and less costly means of obtaining such data. In addition, when measured at a macro level, implementation fidelity data from the two sources (teacher logs and researcher ratings) were moderately correlated. These results inform future theory-driven evaluation by providing guidance on documenting implementation and using that documentation to understand program outcomes.
