Bibliographic record - detail view

 
Authors: Hild, Pitt; Gut, Christoph; Brückmann, Maja
Title: Validating Performance Assessments: Measures That May Help to Evaluate Students' Expertise in 'Doing Science'
Source: Research in Science & Technological Education, 37 (2019) 4, pp. 419-445 (27 pages)
Full text: PDF
Additional information: ORCID (Hild, Pitt); ORCID (Brückmann, Maja)
Language: English
Document type: print; online; journal article
ISSN: 0263-5143
DOI: 10.1080/02635143.2018.1552851
Keywords: Performance Based Assessment; Science Tests; Science Process Skills; Test Validity; Expertise; Student Evaluation; Secondary School Students; Grade 7; Grade 8; Grade 9; Foreign Countries; Scores; Switzerland
Abstract: Background: Several different measures have been proposed to solve persistent validity problems, such as high task-sampling variability, in the assessment of students' expertise in 'doing science'. Such measures include working with a-priori progression models, using standardised item shells and rating manuals, augmenting the number of tasks per student and comparing different measurement methods. Purpose: The impact of these measures on instrument validity is examined here under three different aspects: structural validity, generalisability and external validity. Sample: Performance assessments were administered to 418 students (187 girls, ages 12-16) in grades 7, 8 and 9 in the two lowest school performance tracks in (lower) secondary school in the Swiss canton of Zurich. Design and methods: Students worked with printed test sheets on which they were asked to report the outcomes of their investigations. In addition to the written protocols, direct observations and interviews were used as measurement methods. Evidence of the instruments' validity was reported by using different reliability and generalisability coefficients and by comparing our results to those found in the literature. Results: An a-priori progression model was successfully used to improve the instrument's structural validity. The use of a standardised item shell and rating manual ensured reliable rating of the written protocols (0.79 ≤ p₀ ≤ 0.98; 0.56 ≤ κ ≤ 0.97). Augmenting the number of tasks per student did not solve the challenge of reducing task-sampling variability. The observed performance differed from the performance assessed via the written protocols. Conclusions: Students' performance in doing science can be reliably assessed with instruments that show good generalisability coefficients (ρ² = 0.72 in this case). Even after implementing the different measures, task-sampling variability remains high [mathematical equation = 47.2%]. More elaborate studies that focus on the substantive aspect of validity must be conducted to understand why students' expertise as shown in written protocols differs so markedly from their observed performance. (As Provided).
Notes: Routledge. Available from: Taylor & Francis, Ltd., 530 Walnut Street, Suite 850, Philadelphia, PA 19106. Tel: 800-354-1420; Tel: 215-625-8900; Fax: 215-207-0050; Web site: http://www.tandf.co.uk/journals
Indexed by: ERIC (Education Resources Information Center), Washington, DC
Updated: 2020/01/01
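
For orientation: the generalisability coefficient and the task-sampling variance share cited in the abstract are, in standard generalizability theory for a simple persons-by-tasks (p × t) design, conventionally defined as sketched below. This is only a sketch under that assumption; the study's actual measurement model may include further facets (e.g. raters or measurement methods).

% Relative generalizability coefficient for a persons x tasks (p x t) design,
% with n_t tasks per student; sigma^2_{pt,e} is the confounded
% person-by-task interaction / residual variance component.
\[
  E\rho^2 = \frac{\sigma^2_p}{\sigma^2_p + \dfrac{\sigma^2_{pt,e}}{n_t}}
\]
% Task-sampling variability reported as a percentage: the share of total
% variance attributable to the person-by-task interaction (plus residual).
\[
  \frac{\sigma^2_{pt,e}}{\sigma^2_p + \sigma^2_t + \sigma^2_{pt,e}} \times 100\%
\]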