Performance decline in low-stakes educational assessments: Different mixture modeling approaches. / List, Marit Kristine; Robitzsch, Alexander; Lüdtke, Oliver; Köller, Olaf; Nagy, Gabriel.
In: Large-scale Assessments in Education, Vol. 5, No. 15, 2017. Research output: Contribution to journal › Journal article › Research › peer-review
TY - JOUR
T1 - Performance decline in low-stakes educational assessments
T2 - Different mixture modeling approaches
AU - List, Marit Kristine
AU - Robitzsch, Alexander
AU - Lüdtke, Oliver
AU - Köller, Olaf
AU - Nagy, Gabriel
PY - 2017
Y1 - 2017
N2 - Background: In low-stakes educational assessments, test takers might show a performance decline (PD) on end-of-test items. PD is a concern in educational assessments, especially when groups of students are to be compared on the proficiency variable, because item responses gathered in the groups could be differentially affected by PD. In order to account for PD, mixture item response theory (IRT) models have been proposed in the literature. Methods: In this article, multigroup extensions of three existing mixture models that assess PD are compared. The models were applied to the mathematics test in a large-scale study targeting school track differences in proficiency. Results: Despite the differences in the specification of PD, all three models showed rather similar item parameter estimates that were, however, different from the estimates given by a standard two-parameter IRT model. In addition, all models indicated that the amount of PD differed between tracks, in that school track differences in proficiency were slightly reduced when PD was accounted for. Nevertheless, the models gave different estimates of the proportion of students showing PD, and differed somewhat from each other in the adjustment of proficiency scores for PD. Conclusions: Multigroup mixture models can be used to study how PD interacts with proficiency and other variables to provide a better understanding of the mechanisms behind PD. Differences between the presented models with regard to their assumptions about the relationship between PD and item responses are discussed.
U2 - 10.1186/s40536-017-0049-3
DO - 10.1186/s40536-017-0049-3
M3 - Journal article
VL - 5
JO - Large-scale Assessments in Education
JF - Large-scale Assessments in Education
SN - 2196-0739
IS - 15
ER -