Bibliographic record - detail view
Author | Olney, Andrew M. |
---|---|
Title | Generating Multiple Choice Questions from a Textbook: LLMs Match Human Performance on Most Metrics |
Source | (2023), (19 pages) |
Additional information | ORCID (Olney, Andrew M.) |
Language | English |
Document type | printed; monograph |
Keywords | Test Construction; Multiple Choice Tests; Test Items; Algorithms; Natural Language Processing; Models; Artificial Intelligence; Textbooks; College Science; Science Tests; Anatomy; Physiology |
Abstract | Multiple choice questions are traditionally expensive to produce. Recent advances in large language models (LLMs) have led to fine-tuned LLMs that generate questions competitive with human-authored questions. However, the relative capabilities of ChatGPT-family models have not yet been established for this task. We present a carefully controlled human evaluation of three conditions: a fine-tuned, augmented version of Macaw; instruction-tuned Bing Chat with zero-shot prompting; and human-authored questions from a college science textbook. Our results indicate that on six of seven measures tested, both LLMs' performance was not significantly different from human performance. Analysis of LLM errors further suggests that Macaw and Bing Chat have different failure modes for this task: Macaw tends to repeat answer options, whereas Bing Chat tends not to include the specified answer in the answer options. For Macaw, removing error items from the analysis results in performance on par with humans for all metrics; for Bing Chat, removing error items improves performance but does not reach human-level performance. [This paper was published in the "CEUR Workshop Proceedings," 2023.] (As Provided). |
Indexed by | ERIC (Education Resources Information Center), Washington, DC |
Update | 2024/01/01 |