Bibliographic record - detail view
Authors | Reed, Deborah K.; Mercer, Sterett H.
Title | Potential Scoring and Predictive Bias in Interim and Summative Writing Assessments
Source | In: School Psychology, 38 (2023) 4, pp. 215-224 (10 pages)
Full text | PDF
Additional information | ORCID (Reed, Deborah K.)
Language | English
Document type | print; online; journal article
ISSN | 2578-4218 |
DOI | 10.1037/spq0000527 |
Keywords | Writing Tests; Summative Evaluation; Scoring; Bias; Prediction; Elementary Secondary Education; Gender Differences; English Language Learners; Low Income Students; Special Education; Student Characteristics; Hispanic American Students; Student Evaluation; Teacher Attitudes; Scoring Rubrics; Writing Test; Assessment; Gender Conflict; Special Needs Education; Special School System; Hispanic Americans; Students; School Grades; Teacher Behavior; Scoring Formulas; Scoring Sheet
Abstract | Interim and summative assessments often are used to make decisions about student writing skills and needs for instruction, but the extent to which different raters and score types might introduce bias for some groups of students is largely unknown. To evaluate this possibility, we analyzed interim writing assessments and state summative test data for 2,621 students in Grades 3-11. Both teachers familiar with students and researchers unaware of students' identifying characteristics evaluated the interim assessments with analytic rubrics. Teachers assigned higher scores on the interim assessments than researchers. Female students had higher scores than males, and English learners (ELs), students eligible for free or reduced-price school lunch (FRL), and students eligible for special education (SPED) had lower scores than other students. These differences were smaller for researcher ratings than for teacher ratings. Across grade levels, interim assessment scores were similarly predictive of state rubric scores, scale scores, and proficiency designations across student groups. However, students identified as Hispanic, FRL, EL, or SPED had lower scale scores and a lower likelihood of reaching proficiency on the state exam. For this reason, these students' risk of unsuccessful performance on the state exam would be greater than predicted when based on interim assessment scores. These findings highlight the potential importance of masking student identities when evaluating writing to reduce scoring bias and suggest that the written composition portions of high-stakes writing examinations may be less biased against historically marginalized groups than the multiple-choice portions of these exams. (As Provided).
Notes | American Psychological Association. Journals Department, 750 First Street NE, Washington, DC 20002. Tel: 800-374-2721; Tel: 202-336-5510; Fax: 202-336-5502; e-mail: order@apa.org; Web site: http://www.apa.org
Indexed by | ERIC (Education Resources Information Center), Washington, DC
Update | 2024/01/01