An examination of comprehensibility in a high stakes oral proficiency assessment for prospective international teaching assistants

dc.contributor.advisor: Schallert, Diane L.
dc.creator: McGregor, Lin Alison, 1970-
dc.date.accessioned: 2012-06-12T17:21:21Z
dc.date.accessioned: 2017-05-11T22:25:05Z
dc.date.available: 2012-06-12T17:21:21Z
dc.date.available: 2017-05-11T22:25:05Z
dc.date.issued: 2007-08
dc.description: text
dc.description.abstract: This study investigated the construct of comprehensible English in the context of oral proficiency assessment for international teaching assistants. I carried out a three-part mixed-methods design to explore instructor rater judgments, the results of a speech analysis, and how specific speech variables might have influenced judgments on the assessment criteria. Each part focused on a failed/passed assessment comparison made possible by archived data from 10 individuals who initially failed the oral proficiency test but, within the same year, retook it and received a passing score. Part A evaluated the perspective of the instructor raters through the rating scale judgments recorded on the assessment evaluation forms. In Part B, I coded and scored grammatical, temporal, and phonological variables in two-minute excerpts of a field-specific summary task drawn from the 10 failed and subsequently passed assessments performed by the same individuals, and I inspected the speech analysis results to evaluate differences in specific speech variables between the set of failed performances and the set of passed performances. In Part C, I conducted 10 case studies comparing each individual's rating scale judgments and rater comments on grammar, fluency, and pronunciation from the failed and the passed assessment with the results of the speech analysis of grammatical, temporal, and phonological variables. The case study approach facilitated a broad inspection of the interrelation between the rater perspectives on the assessment criteria and the speech analysis results. The findings showed evidence that temporal and phonological variables jointly influenced rater judgments of comprehensibility and highlighted the role of pronunciation as a criterion for oral proficiency assessments. I concluded with implications for future research on the interrelation among speech variables that influence listener perceptions of comprehensibility and on the use of pronunciation as a speaking assessment criterion.
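As an illustration of the kind of failed-versus-passed comparison described in Part B of the abstract, the sketch below shows one common way to test paired differences in a single temporal measure across the 10 test takers. It is a minimal example only: the variable names and values are hypothetical placeholders, and the nonparametric Wilcoxon signed-rank test is assumed here for illustration; the dissertation's actual coding scheme and analysis may have differed.

    # Illustrative sketch only: paired comparison of one temporal measure
    # (speech rate, in syllables per second) between each test taker's
    # failed performance and later passed performance. All values are
    # placeholders, not data from the study.
    from scipy.stats import wilcoxon

    # One value per test taker (n = 10): failed attempt vs. passed retake.
    rate_failed = [2.1, 2.4, 1.9, 2.6, 2.2, 2.0, 2.3, 1.8, 2.5, 2.2]
    rate_passed = [2.8, 2.9, 2.5, 2.7, 2.6, 2.4, 2.9, 2.3, 3.0, 2.7]

    # Wilcoxon signed-rank test: a nonparametric paired test suited to a
    # small sample of matched failed/passed performances.
    statistic, p_value = wilcoxon(rate_failed, rate_passed)
    print(f"Wilcoxon W = {statistic}, p = {p_value:.3f}")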
dc.description.department: Educational Psychology
dc.format.medium: electronic
dc.identifier.uri: http://hdl.handle.net/2152/15862
dc.language.iso: eng
dc.rights: Copyright is held by the author. Presentation of this material on the Libraries' web site by University Libraries, The University of Texas at Austin was made possible under a limited license grant from the author who has retained all copyrights in the works.
dc.subject: International teaching assistants
dc.subject.lcsh: English language--Spoken English--United States--Examinations--Case studies
dc.subject.lcsh: Graduate teaching assistants--Rating of--United States--Case studies
dc.subject.lcsh: Second language acquisition--Case studies
dc.title: An examination of comprehensibility in a high stakes oral proficiency assessment for prospective international teaching assistants