Bibliographic record - detail view
Authors | Han, Chao; Lu, Xiaolei |
---|---|
Title | Can Automated Machine Translation Evaluation Metrics Be Used to Assess Students' Interpretation in the Language Learning Classroom? |
Source | In: Computer Assisted Language Learning, 36 (2023) 5-6, pp. 1064-1087 (24 pages) |
Additional information | ORCID (Han, Chao); ORCID (Lu, Xiaolei) |
Language | English |
Document type | print; online; journal article |
ISSN | 0958-8221 |
DOI | 10.1080/09588221.2021.1968915 |
Keywords | Translation; Computational Linguistics; Linguistics; Correlation; Language Processing; Second Languages; Language Usage; Cultural Awareness; Cultural Identity; Intercultural Communication; Evaluation Methods; Evaluation; Computer Software; Student Evaluation; Artificial Intelligence; Scoring; Grades; Evaluators; Second Language Learning; Second Language Acquisition; Second Language Instruction; Foreign Language Teaching; Bilingualism; Chinese; English (Second Language); Foreign Countries; Undergraduate Students; Majors (Students); Language Tests; China |
Abstract | The use of translation and interpreting (T&I) in the language learning classroom is commonplace, serving various pedagogical and assessment purposes. Previous use of T&I exercises has been driven largely by their potential to enhance language learning, whereas the latest trend has begun to underscore T&I as a crucial skill to be acquired as part of transcultural competence for language learners and future language users. Despite their growing popularity and utility in the language learning classroom, assessing T&I is time-consuming, labor-intensive and cognitively taxing for human raters (e.g., language teachers), primarily because T&I assessment entails meticulous evaluation of informational equivalence between the source-language message and target-language renditions. One possible solution is to rely on automated quality metrics originally developed to evaluate machine translation (MT). In the current study, we investigated the viability of using four automated MT evaluation metrics, BLEU, NIST, METEOR and TER, to assess human interpretation. Essentially, we correlated the automated metric scores with the human-assigned scores (i.e., the criterion measure) from multiple assessment scenarios to examine the degree of "machine-human parity." Overall, we observed fairly strong metric-human correlations for BLEU (Pearson's r = 0.670), NIST (r = 0.673) and METEOR (r = 0.882), especially when the metric computation was conducted on the sentence level rather than the text level. We discussed these emerging findings and others in relation to the feasibility of operationalizing MT metrics to evaluate students' interpretation in the language learning classroom. (As Provided) |
Notes | Routledge. Available from: Taylor & Francis, Ltd. 530 Walnut Street Suite 850, Philadelphia, PA 19106. Tel: 800-354-1420; Tel: 215-625-8900; Fax: 215-207-0050; Web site: http://www.tandf.co.uk/journals |
Indexed by | ERIC (Education Resources Information Center), Washington, DC |
Update | 2024/01/01 |
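The abstract describes the study's core procedure: compute sentence-level MT-metric scores for students' interpretations against reference translations, then correlate them with human-assigned scores. A minimal, self-contained sketch of that idea is below, using a simplified smoothed BLEU and Pearson's r; all data, sentences, and scores are hypothetical illustrations, not material from the study, and real BLEU/NIST/METEOR/TER implementations differ in detail.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """All contiguous n-grams of a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def sentence_bleu(reference, candidate, max_n=4):
    """Simplified sentence-level BLEU: geometric mean of clipped n-gram
    precisions (add-one smoothed) times a brevity penalty."""
    ref, cand = reference.split(), candidate.split()
    log_prec = 0.0
    for n in range(1, max_n + 1):
        cand_counts = Counter(ngrams(cand, n))
        ref_counts = Counter(ngrams(ref, n))
        clipped = sum(min(c, ref_counts[g]) for g, c in cand_counts.items())
        total = max(sum(cand_counts.values()), 1)
        log_prec += math.log((clipped + 1) / (total + 1)) / max_n
    brevity = 1.0 if len(cand) >= len(ref) else math.exp(1 - len(ref) / max(len(cand), 1))
    return brevity * math.exp(log_prec)

def pearson_r(xs, ys):
    """Pearson product-moment correlation between two score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical data: one reference translation per source sentence,
# one transcribed student rendition each, and a human rater's scores.
references = [
    "the committee approved the new budget proposal",
    "rising sea levels threaten coastal cities worldwide",
    "she thanked the delegates for their patience",
]
renditions = [
    "the committee approved the new budget proposal",
    "sea levels are rising and threaten cities on the coast",
    "she thanked delegates for patience",
]
human_scores = [5.0, 3.0, 3.5]

metric_scores = [sentence_bleu(r, c) for r, c in zip(references, renditions)]
print("metric-human Pearson's r:", round(pearson_r(metric_scores, human_scores), 3))
```

Computing the metric per sentence and correlating across sentences mirrors the sentence-level setup for which the abstract reports the stronger metric-human correlations.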