Meta-regressions under a mixed-effects model (Borenstein et al. 2009) were conducted for the moderator analyses using the metafor and matrix packages in R and RStudio. This analysis uses a Q test as an omnibus test to estimate the heterogeneity among studies explained by the moderator, as well as residual heterogeneity (i.e., whether unexplained between-study variance remained). Moderator levels with a sample size smaller than two were excluded from the analysis. For categorical moderators, post hoc comparisons under a random-effects model were calculated with the MAd package whenever the omnibus test indicated significance.
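For illustration, a minimal metafor sketch of such a mixed-effects meta-regression is shown below; the data frame dat and its columns yi (effect sizes), vi (sampling variances), and moderator are assumed names for illustration, not the variables used in this analysis.

library(metafor)

# Mixed-effects meta-regression with a single moderator.
# 'dat', 'yi', 'vi', and 'moderator' are hypothetical names.
res <- rma(yi, vi, mods = ~ moderator, data = dat, method = "REML")

# QM/QMp: omnibus Q test of the moderator (heterogeneity explained by the moderator)
# QE/QEp: Q test of residual heterogeneity (unexplained between-study variance)
res$QM; res$QMp
res$QE; res$QEp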
Moderator analyses for the cognitive learning outcomes subsplit could only be performed for social interaction, learning arrangement of the comparison group, and period of time because the subgroups for several other moderator levels were too small (see Table 5). In line with the moderator analysis described above (see Table 2), social interaction, learning arrangement of the comparison group, and period of time did not significantly moderate the effects of gamification on cognitive learning outcomes in the subsplit analysis.
The aim of this meta-analysis was to statistically synthesize the current state of research on the effects of gamification on cognitive, motivational, and behavioral learning outcomes, taking into account potential moderating factors. Overall, the results indicated significant, small positive effects of gamification on cognitive, motivational, and behavioral learning outcomes. These findings provide evidence that gamification benefits learning, and they are in line with the theory of gamified learning (Landers 2014) and self-determination theory (Ryan and Deci 2002). In addition, the results of the summary effect analyses were similar to results from meta-analyses conducted in the context of games (see Clark et al. 2016; Wouters et al. 2013), indicating that the power of games can be transferred to non-game contexts by using game design elements. Given that gamification research is an emerging field of study, the number of primary studies eligible for this meta-analysis was rather small, and the effects found in this analysis were in danger of being unstable. Therefore, we investigated the stability of the summary effects. Fail-safe numbers, which estimate the degree to which publication bias may exist in the samples, indicated that the summary effects for cognitive, motivational, and behavioral outcomes were stable. However, subsplits that exclusively included studies with high methodological rigor only supported the summary effect of gamification on cognitive learning outcomes; although these analyses were underpowered, the summary effects for motivational and behavioral learning outcomes were not significant. Thus, according to the subsplit analyses, the summary effects of gamification on motivational and behavioral learning outcomes are not robust. For both motivational and behavioral learning outcomes, the subsplits indicate that the effects of gamification depend on the mode of social interaction. In a nutshell, gamification can be effective for motivational and behavioral learning outcomes when applied in competitive-collaborative settings, in contrast to merely competitive settings.
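Fail-safe numbers of this kind can be obtained directly in metafor; the following is a minimal sketch assuming the same hypothetical data frame dat with effect sizes yi and sampling variances vi, and it shows Rosenthal's variant, which may differ from the exact procedure used in this analysis.

library(metafor)

# Rosenthal's fail-safe N: the number of null-effect studies that would be
# needed to render the summary effect nonsignificant. 'dat', 'yi', and 'vi'
# are illustrative names, not the variables used in this analysis.
fsn(yi, vi, data = dat, type = "Rosenthal")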
For both cognitive and motivational learning outcomes, there was no significant difference in effect sizes between the inclusion and the exclusion of game fiction. The mechanisms that are supposed to be at work according to self-determination theory (Rigby and Ryan 2011) were not fully supported in this analysis. However, the results on cognitive and motivational learning outcomes were in line with meta-analytic evidence from the context of games, which found that including game fiction was not more effective than excluding it (Wouters et al. 2013). Nevertheless, for behavioral learning outcomes, the effects of including game fiction were significantly larger than the effects without game fiction. The fail-safe number for the effect of game fiction on behavioral learning outcomes indicated a stable effect that did not suffer from publication bias. However, studies including game fiction were less likely to use experimental designs, which can be a confounding factor.
Additionally, as gamification can be described as a design process in which game elements are added, the nonsignificant result of this moderator for cognitive and motivational learning outcomes could be explained by the quality with which the (game) design methods were applied: Most learning designers who apply and investigate gamification in the context of learning are not trained as writers and are probably, on average, not successful at applying game fiction effectively. Further, the findings could also be affected by how the moderator was coded. For example, the effectiveness of gamification might depend on whether game fiction was used only at the beginning of an intervention to provide initial motivation and was not relevant afterwards (e.g., avatars that cannot be developed further) or whether it remained relevant throughout the intervention (e.g., meaningful stories that continue). These possible qualitative differences in the design and use of game fiction could have contributed to the mixed results found in the present analysis. A more fine-grained coding was not possible because, given the small sample size, it would have led to subgroups that were too small for any conclusive comparisons. Further, subsplit analyses regarding the moderator inclusion of game fiction were not possible for behavioral learning outcomes because the subgroups were too small.
Results for the moderator learning arrangement of the comparison group did not show significant differences for cognitive and motivational learning outcomes between gamification and different types of instruction (i.e., passive, active, or mixed instruction) in the comparison group. For behavioral learning outcomes, this analysis was not possible because the moderator subgroups were too small. These results may indicate that gamification is not yet used in a way that focuses on fostering high-quality learning activities and thus does not take full advantage of the possibilities that gamification might offer. As proposed by the theory of gamified learning, gamification can affect learning outcomes by enhancing activities that are relevant for learning and might thus create instructional affordances for learners to actively engage in cognitive processes with the learning material. Several primary studies included in this analysis may have failed to specifically provide affordances for high-quality learning activities.
One limitation of our study was that the sample size was rather small, especially for behavioral learning outcomes and all subsplit analyses. This limits the generalizability of the results and is also problematic for statistical power because, for random effects models, power depends on the total number of participants across all studies and on the number of primary studies (Borenstein et al. 2009). If there is substantial between-study variance, as in this meta-analysis, power is likely to be low. For this reason, nonsignificant results do not necessarily indicate the absence of an effect but could be explained by a lack of statistical power, especially if effects are rather small.
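To illustrate this point, the sketch below approximates the power of the random-effects summary-effect test along the lines described by Borenstein et al. (2009); the effect size, per-arm sample size, number of studies, and between-study variance are assumed values for illustration only, not estimates from this meta-analysis.

# Approximate power of the two-tailed z-test of the random-effects summary effect.
# All inputs are illustrative assumptions.
power_re <- function(delta, n1, n2, k, tau2, alpha = 0.05) {
  v_within <- (n1 + n2) / (n1 * n2) + delta^2 / (2 * (n1 + n2))  # variance of d per study
  v_star   <- (v_within + tau2) / k                              # variance of the summary effect
  lambda   <- delta / sqrt(v_star)                               # noncentrality parameter
  z_crit   <- qnorm(1 - alpha / 2)
  1 - pnorm(z_crit - lambda) + pnorm(-z_crit - lambda)
}

# Example: a small effect (d = 0.25), 10 studies with 30 participants per arm,
# and substantial between-study variance (tau^2 = 0.10) yields low power.
power_re(delta = 0.25, n1 = 30, n2 = 30, k = 10, tau2 = 0.10)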
Another limitation concerns the quality of the primary studies in the present analysis, a problem often described with the metaphor garbage in, garbage out (Borenstein et al. 2009). As mentioned earlier, gamification research suffers from a lack of methodological rigor, which, from the perspective of meta-analyses, can be addressed either by assessing methodological differences as moderators or by excluding studies with insufficient methodological rigor. In this analysis, both approaches were applied: methodological factors were included as moderators, and subsplits involving only studies with high methodological rigor were performed. For motivational and behavioral learning outcomes, the results showed that quasi-experimental studies found significant effects, whereas experimental studies showed nonsignificant results, emphasizing the need for more rigorous primary study designs that allow alternative explanations for differences in learning outcomes between conditions to be ruled out. The subsplit analyses showed that the summary effects for motivational and behavioral outcomes were not robust. However, given the small sample sizes in the subgroup analyses, these findings were highly likely to be underpowered and should be viewed with caution.
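As a minimal sketch of the second approach, a subsplit can be computed by re-estimating the random-effects model on the restricted sample; the column name design and its level "experimental" below are assumed labels for illustration, not the coding scheme used in this analysis.

library(metafor)

# Restrict the sample to studies with an experimental design and refit the
# random-effects model. 'dat' and the 'design' column are hypothetical names.
dat_rigorous <- subset(dat, design == "experimental")
res_sub <- rma(yi, vi, data = dat_rigorous, method = "REML")
summary(res_sub)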
Gamification in the context of learning has received increased attention and interest over the last decade because of its hypothesized benefits for motivation and learning. However, some researchers doubt that the effects of games can be transferred to non-game contexts (see Boulet 2012; Klabbers 2018). The present meta-analysis supports the claim that gamification of learning works: we found significant, positive effects of gamification on cognitive, motivational, and behavioral learning outcomes. Whereas the positive effect of gamification on cognitive learning outcomes can be interpreted as stable, the results for motivational and behavioral learning outcomes were shown to be less stable. Further, the substantial amount of heterogeneity identified in the subsamples could not be accounted for by the moderating factors investigated in this analysis, leaving partly unresolved the question of which factors contribute to successful gamification. More theory-guided empirical research is needed to work toward a comprehensive theoretical framework with clearly defined components that describes precise mechanisms by which gamification can affect specific learning processes and outcomes. Future research should therefore explore possible theoretical avenues in order to construct a comprehensive framework that can be empirically tested and refined.