Standard

Performance decline in low-stakes educational assessments: Different mixture modeling approaches. / List, Marit Kristine; Robitzsch, Alexander; Lüdtke, Oliver; Köller, Olaf; Nagy, Gabriel.

In: Large-scale Assessments in Education, Vol. 5, No. 15, 2017.

Publication: Research - Peer-reviewed › Journal articles

Harvard

List, MK, Robitzsch, A, Lüdtke, O, Köller, O & Nagy, G 2017, 'Performance decline in low-stakes educational assessments: Different mixture modeling approaches', Large-scale Assessments in Education, vol. 5, no. 15. DOI: 10.1186/s40536-017-0049-3

APA

List, M. K., Robitzsch, A., Lüdtke, O., Köller, O., & Nagy, G. (2017). Performance decline in low-stakes educational assessments: Different mixture modeling approaches. Large-scale Assessments in Education, 5(15). https://doi.org/10.1186/s40536-017-0049-3

Vancouver

List MK, Robitzsch A, Lüdtke O, Köller O, Nagy G. Performance decline in low-stakes educational assessments: Different mixture modeling approaches. Large-scale Assessments in Education. 2017;5(15). doi: 10.1186/s40536-017-0049-3

BibTeX

@article{1acf45ed313145c083ad9717d99cb92f,
title = "Performance decline in low-stakes educational assessments: Different mixture modeling approaches",
abstract = "Background: In low-stakes educational assessments, test takers might show a performance decline (PD) on end-of-test items. PD is a concern in educational assessments, especially when groups of students are to be compared on the proficiency variable because item responses gathered in the groups could be differently affected by PD. In order to account for PD, mixture item response theory (IRT) models have been proposed in the literature. Methods: In this article, multigroup extensions of three existing mixture models that assess PD are compared. The models were applied to the mathematics test in a large-scale study targeting school track differences in proficiency. Results: Despite the differences in the specification of PD, all three models showed rather similar item parameter estimates that were, however, different from the estimates given by a standard two-parameter IRT model. In addition, all models indicated that the amount of PD differed between tracks, in that school track differences in proficiency were slightly reduced when PD was accounted for. Nevertheless, the models gave different estimates of the proportion of students showing PD, and differed somewhat from each other in the adjustment of proficiency scores for PD. Conclusions: Multigroup mixture models can be used to study how PD interacts with proficiency and other variables to provide a better understanding of the mechanisms behind PD. Differences between the presented models with regard to their assumptions about the relationship between PD and item responses are discussed.",
author = "List, {Marit Kristine} and Alexander Robitzsch and Oliver Lüdtke and Olaf Köller and Gabriel Nagy",
year = "2017",
doi = "10.1186/s40536-017-0049-3",
volume = "5",
journal = "Large-scale Assessments in Education",
issn = "2196-0739",
publisher = "SpringerOpen",
number = "15",
}

RIS

TY - JOUR

T1 - Performance decline in low-stakes educational assessments

T2 - Large-scale Assessments in Education

AU - List, Marit Kristine

AU - Robitzsch, Alexander

AU - Lüdtke, Oliver

AU - Köller, Olaf

AU - Nagy, Gabriel

PY - 2017

Y1 - 2017

N2 - Background: In low-stakes educational assessments, test takers might show a performance decline (PD) on end-of-test items. PD is a concern in educational assessments, especially when groups of students are to be compared on the proficiency variable because item responses gathered in the groups could be differently affected by PD. In order to account for PD, mixture item response theory (IRT) models have been proposed in the literature. Methods: In this article, multigroup extensions of three existing mixture models that assess PD are compared. The models were applied to the mathematics test in a large-scale study targeting school track differences in proficiency. Results: Despite the differences in the specification of PD, all three models showed rather similar item parameter estimates that were, however, different from the estimates given by a standard two-parameter IRT model. In addition, all models indicated that the amount of PD differed between tracks, in that school track differences in proficiency were slightly reduced when PD was accounted for. Nevertheless, the models gave different estimates of the proportion of students showing PD, and differed somewhat from each other in the adjustment of proficiency scores for PD. Conclusions: Multigroup mixture models can be used to study how PD interacts with proficiency and other variables to provide a better understanding of the mechanisms behind PD. Differences between the presented models with regard to their assumptions about the relationship between PD and item responses are discussed.

AB - Background: In low-stakes educational assessments, test takers might show a performance decline (PD) on end-of-test items. PD is a concern in educational assessments, especially when groups of students are to be compared on the proficiency variable because item responses gathered in the groups could be differently affected by PD. In order to account for PD, mixture item response theory (IRT) models have been proposed in the literature. Methods: In this article, multigroup extensions of three existing mixture models that assess PD are compared. The models were applied to the mathematics test in a large-scale study targeting school track differences in proficiency. Results: Despite the differences in the specification of PD, all three models showed rather similar item parameter estimates that were, however, different from the estimates given by a standard two-parameter IRT model. In addition, all models indicated that the amount of PD differed between tracks, in that school track differences in proficiency were slightly reduced when PD was accounted for. Nevertheless, the models gave different estimates of the proportion of students showing PD, and differed somewhat from each other in the adjustment of proficiency scores for PD. Conclusions: Multigroup mixture models can be used to study how PD interacts with proficiency and other variables to provide a better understanding of the mechanisms behind PD. Differences between the presented models with regard to their assumptions about the relationship between PD and item responses are discussed.

U2 - 10.1186/s40536-017-0049-3

DO - 10.1186/s40536-017-0049-3

M3 - Journal articles

VL - 5

JO - Large-scale Assessments in Education

JF - Large-scale Assessments in Education

SN - 2196-0739

IS - 15

ER -

ID: 843265