Bibliographic record - detail view
| Field | Value |
|---|---|
| Authors | Latifi, Syed; Gierl, Mark |
| Title | Automated scoring of junior and senior high essays using Coh-Metrix features: Implications for large-scale language testing. |
| Source | In: Language Testing, 38 (2021) 1, pp. 62-85 |
| Supplements | 1 note; references; 8 tables |
| Language | English |
| Document type | online; print; journal article |
| ISSN | 0265-5322; 1477-0946 |
| Keywords | Empirical research; Test; Essay; Comparative analysis; Assessment |
| Abstract | An automated essay scoring (AES) program is a software system that uses techniques from corpus and computational linguistics and machine learning to grade essays. In this study, the authors aimed to describe and evaluate particular language features of Coh-Metrix for a novel AES program that would score junior and senior high school students' essays from their large-scale assessments. Specifically, they studied nine categories of Coh-Metrix features for developing prompt-specific AES scoring models for their sample. The authors developed the models by capitalizing on the nine features' informativeness as a function of dimensionality reduction. They used a three-staged scoring framework. The machine scores were validated against a "gold standard" of ratings, that is, those assigned by two human raters. The nine language features reliably captured the construct of the students' writing quality. The authors performed a secondary analysis to see how the scoring models performed in relation to other, already established AES systems, and found no systematic pattern of scoring discrepancy. However, for essays with widely divergent human ratings, the scoring models were disadvantaged owing to the inherent unreliability of the human scores. (Publisher, adapted.) |
| Recorded by | Informationszentrum für Fremdsprachenforschung, Marburg |
| Update | 2022/2 |