Standard

Assessment of computer and information literacy in ICILS 2013: Do different item types measure the same construct? / Ihme, Jan Marten; Senkbeil, Martin; Goldhammer, Frank; Gerick, Julia.

In: European Educational Research Journal, Vol. 16, No. 6, 08.11.2017, p. 716-732.

Publication: Research - peer-review › Journal articles

Harvard

Ihme, JM, Senkbeil, M, Goldhammer, F & Gerick, J 2017, 'Assessment of computer and information literacy in ICILS 2013: Do different item types measure the same construct?', European Educational Research Journal, vol. 16, no. 6, pp. 716-732. DOI: 10.1177/1474904117696095

APA

Ihme, J. M., Senkbeil, M., Goldhammer, F., & Gerick, J. (2017). Assessment of computer and information literacy in ICILS 2013: Do different item types measure the same construct? European Educational Research Journal, 16(6), 716-732. https://doi.org/10.1177/1474904117696095

Vancouver

Ihme JM, Senkbeil M, Goldhammer F, Gerick J. Assessment of computer and information literacy in ICILS 2013: Do different item types measure the same construct? European Educational Research Journal. 2017 Nov 8;16(6):716-732. doi: 10.1177/1474904117696095

BibTeX

@article{b99490dd036b403ab7bc122ebed03aed,
title = "Assessment of computer and information literacy in ICILS 2013: Do different item types measure the same construct?",
abstract = "The combination of different item formats is found quite often in large scale assessments, and analyses on the dimensionality often indicate multi-dimensionality of tests regarding the task format. In ICILS 2013, three different item types (information-based response tasks, simulation tasks, and authoring tasks) were used to measure computer and information literacy in order to balance technological and information-related aspects of computer and information literacy. The item types differ in the cognitive processes and the type of knowledge they measure and in the strands and aspects of the ICILS 2013 framework they address. In this article, we explored which factor models that assume item type factors or type of knowledge factors fit the data. For the factors of the best fitting models, regression analyses on SES, frequency of computer use, self-efficacy, and gender were computed to work out the different meanings and the convergent and discriminant validity of the factors. The results show that three-dimensional models with correlated factors for item type or type of knowledge fit best. Regression analyses discover substantive implications of between-item and within-item models. The effects are discussed and an outlook is given.",
author = "Ihme, {Jan Marten} and Martin Senkbeil and Frank Goldhammer and Julia Gerick",
year = "2017",
month = nov,
doi = "10.1177/1474904117696095",
volume = "16",
pages = "716--732",
journal = "European Educational Research Journal",
number = "6",
}

RIS

TY - JOUR

T1 - Assessment of computer and information literacy in ICILS 2013: Do different item types measure the same construct?

AU - Ihme, Jan Marten

AU - Senkbeil, Martin

AU - Goldhammer, Frank

AU - Gerick, Julia

PY - 2017/11/8

Y1 - 2017/11/8

N2 - The combination of different item formats is found quite often in large scale assessments, and analyses on the dimensionality often indicate multi-dimensionality of tests regarding the task format. In ICILS 2013, three different item types (information-based response tasks, simulation tasks, and authoring tasks) were used to measure computer and information literacy in order to balance technological and information-related aspects of computer and information literacy. The item types differ in the cognitive processes and the type of knowledge they measure and in the strands and aspects of the ICILS 2013 framework they address. In this article, we explored which factor models that assume item type factors or type of knowledge factors fit the data. For the factors of the best fitting models, regression analyses on SES, frequency of computer use, self-efficacy, and gender were computed to work out the different meanings and the convergent and discriminant validity of the factors. The results show that three-dimensional models with correlated factors for item type or type of knowledge fit best. Regression analyses discover substantive implications of between-item and within-item models. The effects are discussed and an outlook is given.

AB - The combination of different item formats is found quite often in large scale assessments, and analyses on the dimensionality often indicate multi-dimensionality of tests regarding the task format. In ICILS 2013, three different item types (information-based response tasks, simulation tasks, and authoring tasks) were used to measure computer and information literacy in order to balance technological and information-related aspects of computer and information literacy. The item types differ in the cognitive processes and the type of knowledge they measure and in the strands and aspects of the ICILS 2013 framework they address. In this article, we explored which factor models that assume item type factors or type of knowledge factors fit the data. For the factors of the best fitting models, regression analyses on SES, frequency of computer use, self-efficacy, and gender were computed to work out the different meanings and the convergent and discriminant validity of the factors. The results show that three-dimensional models with correlated factors for item type or type of knowledge fit best. Regression analyses discover substantive implications of between-item and within-item models. The effects are discussed and an outlook is given.

U2 - 10.1177/1474904117696095

DO - 10.1177/1474904117696095

M3 - Journal articles

VL - 16

SP - 716

EP - 732

JO - European Educational Research Journal

T2 - European Educational Research Journal

JF - European Educational Research Journal

IS - 6

ER -

ID: 855305