Feedback is one of the most important factors for successful learning. Contemporary computer-based learning and testing environments allow automated feedback to be implemented in a simple and efficient manner. Previous meta-analyses suggest that different types of feedback are not equally effective; this heterogeneity might depend on learner and test characteristics as well as the assessed outcome measure. Here, we present a novel network meta-analysis approach that allowed us to compare (i.e., rank) different types of feedback with respect to their effects on performance measures. Following an extensive literature search, we used 163 effect sizes from 77 experimental studies to compare classical feedback variants such as Knowledge of Results (KR), Knowledge of Correct Response (KCR), Elaborated Feedback (EF), and Answer-Until-Correct (AUC) feedback with each other and with a No Feedback (NoFB) control group. Our findings indicate that EF is most likely to be the most effective feedback type for both lower-order (i.e., recall/recognition) and higher-order (i.e., transfer) learning outcomes compared with the other variants. For KCR and AUC, we typically found small to large effect sizes on learning outcomes. KR was less effective than the other feedback types at improving lower-order and higher-order learning outcomes. Several subgroup analyses are reported to identify moderating factors for the effectiveness of different feedback interventions across learner characteristics (i.e., sample source and prior knowledge level) and test characteristics (i.e., learning domain and test format).
Original language: English
Journal: Journal of Educational Psychology
Issue number: 8
Pages (from-to): 1743-1772
Number of pages: 30
Publication status: Published - 11.2022

Research areas

  • computer-based tests, feedback, network meta-analysis, performance
