Web-based training evaluation in the workplace: practices, instructional architectures, and skills
Rodgers, Ida L.
On August 19, 2005, I successfully defended my doctoral dissertation before my committee: Dr. Locke Carter (Chair), Dr. Sam Dragga (Department Chair), and Dr. Craig Baehr. In addition to my dissertation committee, Dr. David Roach from Communication Studies represented the university. The dissertation research, "Web-based training evaluation in the workplace: Practices, instructional architectures, and skills," examined Web-based training (WBT) evaluation (WBTE) in workplaces.

The problem to which this research responds is the scarcity of information about WBTE in the field of technical communication. As a result, practitioners may lack expertise that would make them valuable to WBT development teams and their companies, and academic program developers may lack courses or course components suited to students taking jobs with companies that create WBT. The problem stems from three factors: (a) training has largely shifted from the classroom to the Web, (b) technical communicators work on Web-based training projects, and (c) evaluation is a necessary component of WBT projects. The problem affects both practitioners and academics because the field does not identify its members as WBT developers and evaluation experts.

The study's multiple methods included a workplace site visit, an expert-panel validity review of survey items, a usability test, an online survey, a second expert-panel validity review of the results, and researcher reflections to identify results that triangulate. The study also crossed many disciplinary boundaries, which presented challenges to validity; those challenges became a methods thread in the study as I sought to build a case for validity. Results of the study include information valuable to technical communication practitioners and to academics responsible for program development.
Some results show that my participants are highly educated, come from widely varied fields, work on teams of three to five people, often perform the team role of project manager, and employ a wide variety of formative, summative, and reflective evaluation measures. Results concerning the instructional architecture methods used were less clear, except to illustrate the technical communication maxim that form and content depend on context, audience, and purpose. The study results may affect practitioner self-study, program development, and research methods in our field because they illustrate the desirability of expanding our field's definition of itself to include WBT developers and evaluation experts.

In addition to collecting data, the study represents a model of the three evaluation stages: formative, summative, and reflective (a term I adopted from the field of composition that applies to some evaluation and research methods). The study concludes with three practical products. The first is a set of suggestions for both practitioners and academic program developers for crossing disciplinary boundaries to achieve this expansion. The second is a list of types of online training or education, with suggestions for evaluation methods that apply to each type. The third is a research methods model that includes formative, summative, and reflective practices.

This study revealed many additional areas for research. These include examination of evaluation methods appropriate and useful for various types of online education, particularly in the field of technical communication; examination of WBT evaluation measures of results and how they affect public discourse; and examination of decision making in WBT and online education development processes, including rhetorical, ethical, and methodological considerations.