Impact of gamified rubrics in teacher training
Educación XX1, vol. 28, núm. 1, pp. 313-336, 2025
Universidad Nacional de Educación a Distancia

Studies



Received: 10 January 2024

Accepted: 09 September 2024

Published: 07 January 2025

DOI: https://doi.org/10.5944/educxx1.39457

Abstract: Formative assessment of learning requires methodological strategies that foster students' intrinsic motivation. Gamification achieves more active participation in the assessment of learning; however, there is controversy about whether gamification in the classroom should be used competitively or not. It therefore makes sense to train in-service teachers with the very methodologies we want to introduce in class, especially with tools such as gamified rubrics. The research has a quasi-experimental design and analyzes the impact on learning of gamified digital rubrics with «competitive badges» (competing with the group) vs. «non-competitive badges» (competing with oneself). The sample consists of 70 in-service teachers enrolled in a postgraduate degree program. To assess the effect of including game elements in assessment, the study compares tasks performed, rewards obtained and final grades between the two groups. A validated survey was used to measure participants' satisfaction with the teaching and assessment method. The results indicate that the teachers are very satisfied and that final grades depend mainly on the number of rewards received during the course: the gamified rubrics have an impact on the final grade, conditioned by the number of badges. It is concluded that this is a methodological strategy that needs to be analyzed beyond the results obtained in order to improve teachers' perception of gamification and other teaching innovations.

Keywords: formative assessment, in-service teacher training, educational technology, pedagogical innovation, teacher competencies.


INTRODUCTION

Competency-based assessment is a field that still requires considerable attention in in-service teacher training, not only in Spain but also in other countries. Digital rubrics offer significant benefits here, serving as instruments that support teachers in competency-based assessment of learning (Raposo-Rivas & Cebrián-de-la-Serna, 2019); they also embody a methodological process, since they specify assessment indicators and evidence that can be analyzed with greater objectivity in a shared analysis among teachers. In this way, digital rubrics, which are now widely used, can be a relevant part of teacher training methodology, retaining their polysemic nature as a technique, an instrument and a technology for formative assessment (Pérez-Torregrosa et al., 2022a; Cebrián de la Serna & Bergman, 2014). They are not immune to emerging technologies, such as Big Data analysis and artificial intelligence, which promise both new opportunities and risks for their future use. Continuous training is therefore required for teachers to master these technologies, techniques and tools effectively.

Lifelong learning in digital competences is a necessity and a point of broad agreement, given the changes and transformations under way in all professions, and even more so for teachers, the main agents of change and those responsible for training the citizens of the future in digital competences. This training must aim to develop digital competences together with the creation of networks for sharing professional knowledge (Ruiz-Rey et al., 2021). Within the extensive literature on teacher training and professional development, we can distinguish attempts to introduce innovations in the use of technologies in general (Hennessy et al., 2022) and of methodologies in particular, such as gamification (Zainuddin et al., 2020; Franco-Mariscal et al., 2021), alongside more specific technologies such as digital video (Basgall et al., 2023), photo-elicitation (García-Vera et al., 2020) and current social networks for exchanging teaching experiences in a non-formal training model (Marcelo & Marcelo, 2021). These technologies are used either as an end or as a means, and their considerable advantages include a change in the role of teachers (Chacón et al., 2015) and training in the new functionalities they make possible, among them storing, sharing and analyzing experiences and good practices.

Gamification methodologies have proved very effective for verifying students' acquisition of competences and, in turn, for keeping students engaged (Barna & Fodor, 2017; Bouchrika et al., 2019). These results encourage the use of this methodology in the ongoing training of teachers, so that they experience it satisfactorily themselves and, through their commitment to competency-based learning, can more easily acquire innovative models to introduce in their classrooms. Training education professionals in the use of technologies through gamification processes is thus an interesting way to bring active models and methodologies into play.

Moreover, gamification has been found to have positive effects on academic performance, student engagement and motivation (Manzano-León et al., 2021; Nair & Mathew, 2021; Murillo-Zamorano et al., 2021), as well as, most interestingly for our study, on initial and ongoing teacher training. Gamification builds the trust teachers need to share and discuss in depth the use of different pedagogical strategies (Greaves & Vlachopoulos, 2023). Collaboration in the gamification process also brings interesting benefits through the interactivity between participants and through the peer-to-peer interaction that enables formative evaluation (Marín & Pérez Garcias, 2016).

It is therefore worth testing the impact of this methodology for sharing problems and possible solutions to specific teaching issues, such as the assessment of learning and the role of technologies in that process. In initial teacher training it has been successfully implemented for the design of gamification exercises, with results depending on the purpose of the learning objectives (Pozo-Sánchez et al., 2022). Another example of rubric creation in initial teacher training is the work of Franco-Mariscal et al. (2021) on evaluating gamified teaching resources, where the impact of participants' reflections was analyzed by category, finding notable changes during the design and preparation of the gamified resource and very small changes after implementation. The learning produced in the large-group sessions where criteria were agreed upon favors deeper reflection. Since these studies focus on initial teacher training, it remains to be seen what possibilities such approaches also offer for in-service training.

On the other hand, it has been shown that systems based on scores and badges attract participants' commitment and attention to the skills at stake, generating intrinsic motivation (Xu et al., 2021), although they can also demotivate those who cannot keep up with the program. Their success will therefore depend largely on the purposes and learning objectives, as well as on the teacher's design of the tasks (Pozo-Sánchez et al., 2022). It follows that we should design programs where gamification is attenuated through more collaborative work, such as teamwork, or through non-competitive badges that involve competing with oneself rather than with the group. This leads to tasks with differentiated gamification methodologies, and to a variable that has received little study: whether the evaluation involves competition against oneself (ipsative evaluation) or against the evaluation indicators and results of the group, depending on where the focus of the competition is placed.

Seeking higher-quality, more effective feedback paves the way to meaningful learning and is the essence of formative assessment.

Understanding how digital tools promote gamification in this feedback can have a greater impact on academic performance. This is shown in Maraza-Quispe's (2024) study, which concludes that gamification influences feedback, improving student learning and motivation with the support of technologies. However, that study does not address aspects of gamification that are perceived negatively, such as competitiveness among peers, a circumstance that should be considered and submitted to participants' analysis within a formative assessment methodology.

In summary, gamification is a promising methodology for teacher education, for collegial development of teacher competences, and for collaborative work through playful activities. It can enhance collaboration among teachers and foster their professional development, as it promotes collaborative feedback and the exchange of ideas among colleagues. Furthermore, methodologies based on digital rubrics, a more objective procedure for competence assessment (Fernández Medina et al., 2021), emerge as an optimal methodology for the exchange of good practices, constructive dialogue and discussion around assessment criteria. This makes it possible to explore the possibilities that technologies offer for the assessment of learning (Cebrián-de-la-Serna, 2018a). At the same time, an ongoing debate in the field of education raises questions and doubts about the benefits of gamification. To overcome the reluctance that any innovation may generate, it is especially appropriate to offer continuous training that allows teachers to experiment with and evaluate such innovations personally. It is likewise important to determine the level of satisfaction and the value teachers attribute to these methodologies when applied in training and professional contexts.

METHOD

The research had a convergent mixed design. Quantitatively, it falls within a quasi-experimental design, as the participants were not randomly assigned and it attempts to demonstrate a causal relationship between two or more variables. It also adopts a qualitative or, in this case, 'quali-quantitative' perspective (Aguilar et al., 2022), using textual statistics and Bayesian networks applied to discourse.

The sample was selected by convenience: the participants constitute the entire enrolment of a postgraduate course on Educational Technology taught at an Ecuadorian institution as a requirement for their professional development. According to Tamayo (2001), this type of sampling is used to obtain information quickly in the exploratory stages of research or as a basis for generating hypotheses.

This is a group of 70 teachers (43 women and 27 men) from different educational levels (pre-school, primary, secondary and university) and subject areas, from different educational centers throughout the country. This group was divided into two subgroups, one with a morning timetable (group A, with 37 teachers, 22 women and 15 men) and the other with an afternoon timetable (group B, with 33 teachers, 21 women and 12 men), which facilitated the research design.

The question this research aims to answer is: to what extent does a formative assessment methodology with a gamified rubric affect final assignment grades?

To this end, the following objectives are set:

  1. To identify whether there are differences in task results (grades) when using formative assessment with a gamified rubric.
  2. To analyze the relationship between the final grades and the tasks set or badges received.
  3. To check participants' satisfaction with an assessment methodology based on gamified rubrics.

Therefore, the study variables were five:

  • The tasks set.

  • The group: A (morning) or B (afternoon).

  • The badges collected on the digital rubric platform Corubric.com.

  • Satisfaction with the rubric.

  • The final grade.

Instruments

a) Gamified rubric

The rubric platform used was Corubric.com, designed for m-learning through access by invitation with QR codes and an interface adapted to mobile devices (tablets and smartphones). Users can thus carry out assessments with their mobile devices at the click of a button, anywhere and at any time. From a pedagogical point of view, the platform's design allows for different modalities of formative assessment (team and group assessment, peer assessment, self-assessment, ipsative assessment, etc.), adapting flexibly to various levels of achievement, evidence and indicators (each piece of evidence can have different degrees of achievement and specific weights). The platform can also assign gamified values and points (see Figure 1), which can be exported to a spreadsheet.


Figure 1
Image of the results of the gamified rubric for one of the tasks

b) Satisfaction questionnaire

To measure satisfaction with the rubric, a Likert-type scale is used, consisting of eleven items that assess the possibilities and demands it offers for evaluation:

  • Corubric allows for more objective assessment.

  • Corubric forces teachers to clarify their assessment criteria.

  • Corubric makes it possible to make known what is expected.

  • Corubric provides feedback on the development of the work.

  • Corubric helps us to understand the qualities that the work must possess.

  • Corubric shows us how we will be evaluated.

  • Corubric allows us to evaluate ourselves.

  • Corubric informs us of the weighting of the components in relation to the total mark.

  • Corubric allows us to check the level of competence acquired.

  • Corubric allows all groups to be evaluated equally.

  • Corubric provides evidence of the work done.

These items are rated on four response options: strongly disagree, disagree, agree and strongly agree. The questionnaire also includes an open-ended question. The reliability of the scale, measured by Cronbach's alpha, is 0.87. Its content validity is supported by the appropriateness of the participants' answers in relation to the construct being assessed (Pedrosa et al., 2013) and by the average score that participants as a whole give to all the items (see Table 4).
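
For readers who wish to replicate the reliability check, the alpha computation is straightforward. The following is a minimal Python sketch, assuming a hypothetical 70×11 matrix of Likert answers (the study's real response data are not published):

    import numpy as np

    def cronbach_alpha(responses):
        """Cronbach's alpha for an (n_respondents x n_items) score matrix:
        alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)."""
        k = responses.shape[1]
        item_variances = responses.var(axis=0, ddof=1)
        total_variance = responses.sum(axis=1).var(ddof=1)
        return k / (k - 1) * (1 - item_variances.sum() / total_variance)

    # Hypothetical 1-4 Likert answers from the 70 respondents to the 11 items.
    rng = np.random.default_rng(0)
    likert = rng.integers(1, 5, size=(70, 11))
    print(round(cronbach_alpha(likert), 2))  # the study reports 0.87 on its real data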

c) Innovation with gamification

After the subject teacher's explanatory talk on the essential elements and indicators that an educational innovation project should have, and before the students began designing the educational innovation projects to be presented at the end of the course, they were offered, by way of example, two articles published in indexed journals. These documents report on educational innovation projects and were intended for reading, assessment and discussion. Two tasks were set, one per article: by reading the article and answering a knowledge questionnaire with closed questions, students' identification, interpretation and understanding of the essential elements of any such project could be assessed. The two tasks were carried out in the same session; after 30 minutes of reading, individually or in teams, a questionnaire on the article analyzed was answered both individually and in teams.

The articles chosen presented some of the sections or elements required by the educational innovation model taught in the course either clearly, confusingly or not at all; they were therefore not examples that would obtain the maximum score once the innovation project evaluation rubric was applied.

The questionnaire reproduced the template explained in class on the essential elements and indicators of any educational innovation project. It was created on the course platform using closed questions with four answer options: one correct, one nonsensical and two plausible distractors closely related to the correct answer. The results were available immediately through the system. They were then analyzed with the whole class group, and the correct answers were given for each task, resolving doubts and generating debate.

To motivate completion of the exercises, a gamified rubric (Figure 1) awarded points according to the number of correct questionnaire answers, individually or in teams, within a given time. The wording of the two tasks was identical, as shown below, except for a brief difference at the end regarding how badges were awarded:

«This exercise is intended to help you understand the sections of the project we are asking for at the end of the course. You have a template to guide you with questions. To understand this template and sections before planning the requested project, we are going to carry out a training exercise applying these sections to an already completed and published project, which is compulsory reading. To check if you have understood the basics of the project, answer the questions below».

For competitive badges (competition between teams), the following was added at the end of the text: «The first five with the highest marks will be awarded a badge». For non-competitive badges (competition with oneself), the paragraph instead stated that «a badge will be awarded to all those who pass the questionnaire».
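
The two award rules can be stated precisely. Below is a minimal Python sketch of the logic described above; the function names, the example marks and the pass mark of 7.0 are illustrative assumptions, not values taken from the study:

    def competitive_badges(scores, top_n=5):
        """Competitive rule: only the top_n highest questionnaire marks earn a badge."""
        ranked = sorted(scores, key=scores.get, reverse=True)
        return ranked[:top_n]

    def non_competitive_badges(scores, pass_mark=7.0):
        """Non-competitive rule: every participant who passes earns a badge."""
        return [name for name, mark in scores.items() if mark >= pass_mark]

    # Invented marks; the pass mark is also an assumption for illustration.
    marks = {"ana": 9.5, "luis": 8.0, "eva": 6.5, "joel": 7.2, "mia": 9.0, "sam": 7.8}
    print(competitive_badges(marks))      # the five best scorers
    print(non_competitive_badges(marks))  # everyone at or above the pass mark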

In summary, the sequence of application of the gamified rubric with competitive and non-competitive badges in each group and task is shown in Table 1 below:

Table 1
Distribution of tasks between the groups

Data analysis

Correlational analysis was used to answer the research questions from a frequentist approach, using the statistical program IBM SPSS Statistics 26. To better understand the data, Bayesian networks were then used, a form of analysis that identifies patterns and supports decisions based on multiple factors simultaneously. With the open-source program OpenMarkov (Arias et al., 2019), we obtained a network that visualizes the probabilities of conditional events based on automatic learning from the data gathered in the research. These networks are appropriate for modelling multivariate systems aimed at classification, diagnosis and decision-making in various fields (López-Puga, 2012).
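
The paper performed this step with OpenMarkov's graphical interface. As an illustrative alternative only, the following Python sketch uses the pgmpy library to learn a network structure with hill climbing guided by the K2 score, the two algorithms named in the text; the file name and column names are assumptions:

    import pandas as pd
    from pgmpy.estimators import HillClimbSearch, K2Score
    from pgmpy.models import BayesianNetwork

    # Hypothetical export of the study variables; column names are assumed:
    # task1, task2, group, badges, final_grade.
    data = pd.read_csv("corubric_export.csv")

    # Structure learning: hill climbing guided by the K2 score,
    # mirroring the two algorithms the paper names.
    dag = HillClimbSearch(data).estimate(scoring_method=K2Score(data))

    model = BayesianNetwork(dag.edges())
    model.fit(data)  # maximum-likelihood estimation of the conditional probability tables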

Finally, textual statistics were used to analyze the open-ended responses to the satisfaction questionnaire by generating minimal units of analysis. For this purpose, we used the similarity analysis technique based on co-occurrence tables (Benzécri, 1982) in the IRAMUTEQ program (R Interface for Multidimensional Analysis of Texts and Questionnaires), developed at the University of Toulouse (Ratinaud & Marchand, 2012) under a GNU licence and frequently used in the field of education (Aguilar et al., 2022).
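
To make the technique concrete, here is a minimal Python sketch of a co-occurrence count; IRAMUTEQ adds lemmatization, stop-word filtering and graph layout on top of essentially this idea. The sample answers are invented for illustration:

    from collections import Counter
    from itertools import combinations

    # Invented answers standing in for the open-ended responses.
    answers = [
        "the rubric allows quick clear evaluation",
        "the rubric allows evaluating competencies and criteria",
        "the rubric requires internet and a specific program",
    ]

    cooccurrence = Counter()
    for answer in answers:
        words = set(answer.split())
        for pair in combinations(sorted(words), 2):
            cooccurrence[pair] += 1  # edge weight: answers in which both words co-occur

    # The heaviest pairs correspond to the thick edges in a graph like Figure 6.
    print(cooccurrence.most_common(5))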

RESULTS

When studying the relationship between the variables, we found no correlation between the final grade and the results of tasks 1 and 2 (Sig.=.982 and Sig.=.421, respectively), but there is a correlation between the scores of the two tasks. Table 2 shows significant differences between the scores for task 1 and task 2, with a 95% confidence interval for the difference.

Table 2
Paired samples test between task 1 and task 2 scores

Furthermore, although the T-test seems to indicate differences in final grades according to group (Table 3), the calculated power of the test is low (61%) and the probability of a type II error is correspondingly high (0.39).
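
A post-hoc power calculation of this kind can be reproduced, for example, with statsmodels. The sketch below assumes a medium effect size of 0.5, which is an illustrative value rather than one reported in the paper; the group sizes (37 and 33) are those of the study:

    from statsmodels.stats.power import TTestIndPower

    # effect_size=0.5 is an assumption for illustration; nobs1 and ratio
    # encode the two group sizes from the study (37 and 33).
    power = TTestIndPower().power(effect_size=0.5, nobs1=37, alpha=0.05, ratio=33 / 37)
    print(f"power = {power:.2f}, type II error risk = {1 - power:.2f}")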

To what extent are the variables studied related?

Given the inconclusive results obtained with frequentist statistics, the research team chose to shift towards more inductive positions, using all the information gathered to construct a model that, based on the empirical knowledge acquired, infers well-founded explanations from experience and the prior information available. Bayesian statistics makes all this possible.

The algorithms implemented in the OpenMarkov tool (Arias et al., 2019) make it possible to build a Bayesian network (Figure 2) that visualizes the relationships between the variables under study (the tasks, the group, the badges and the grades) with arrows whose direction indicates how the probabilities are conditioned. The algorithms used, Hill Climbing and K2, detect these relationships and make them visible. The constructed network (Figure 2) shows that the probability of obtaining one final mark or another is conditioned by the number of badges and, to a greater extent, by the mark for task 1, whose probabilities vary more uniformly across values than those of task 2.

Table 3
Results of the Levene’s test and T-test


Figure 2
Bayesian network of total variables

One of the great advantages of this type of knowledge network is the possibility of running queries and interpreting the results. For example, if we set the final score at eight, the lowest score obtained, we can see what probability values the remaining variables take. With that score, the probability of belonging to group A is 94.4%, while that of belonging to group B is only 5.5% (Figure 3). Likewise, 95.8% of these cases obtained a badge in task 2, while 86.1% have no badge in task 1.


Figure 3
Bayesian network on the assumption «rating=8»
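
Continuing the pgmpy sketch above (an illustrative stand-in for OpenMarkov, with assumed variable names), this kind of query conditions the network on the evidence «final grade = 8» and reads off the posterior probabilities of the other variables:

    from pgmpy.inference import VariableElimination

    # `model` is the fitted BayesianNetwork from the earlier sketch.
    inference = VariableElimination(model)

    # Posterior over group membership given the lowest final grade, as in Figure 3;
    # on the study's data this yields P(group = A | final_grade = 8) = 94.4%.
    posterior = inference.query(["group"], evidence={"final_grade": 8})
    print(posterior)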

In addition, the queries can be expanded for more detail, as shown in Figure 4, where the previous values (in red) are compared with those obtained by setting the final mark at ten (in blue). With a ten in the final grade, the probability of belonging to either group is quite similar, i.e. this score is obtained in groups A and B with similar frequency. On the other hand, the mark obtained in task 2 does not influence the final grade, unlike that of task 1. This capacity for nuance is one of the great advantages of Bayesian statistics, which is more inductive, over frequentist statistics, a predominantly deductive model. Differences that are not clearly visible in Table 3 above are shown in Figure 4.


Figure 4
Bayesian network under the assumption «rating=8» and «rating=10»


Figure 5
Bayesian network based on badges

As far as the badges are concerned, Figure 5 shows that the badge for task 1 carries the highest weight in the final grade: with a badge in this task, the probability of obtaining a final mark of ten is 60.5%.

How satisfied are users with the gamified rubric?

Participants expressed strong agreement on all satisfaction items (Table 4): the lowest average score is 3.36 out of 4 points, for the item 'Corubric provides us with feedback on the development of the work'. The highest-scoring items highlight that the tool allows 'a more objective evaluation' (3.56), 'makes known what is expected' (3.54), 'evidences the work done' (3.53) and 'helps to understand the qualities that the work should possess' (3.50).

Table 4
Descriptive statistics on satisfaction with the gamified rubric

When asked the open-ended question 'Do you recommend me to continue using Corubric?', the response was overwhelmingly positive (97.14%), with only 2 negative answers. Within the textual statistical analysis, this response was subjected to similarity analysis based on co-occurrence tables (Benzécri, 1982). The resulting graph (Figure 6) considers the co-occurrence of words, their frequency and their proximity: frequency is shown by word size, degree of co-occurrence by the thickness of the connecting lines, and grouping by a colored halo surrounding each cluster. Four main clusters are visible, all pivoting on the concept of assessing by rubric, what it facilitates and its difficulties. Participants stressed that:

  • It allows a quick, clear and easy evaluation.

  • It allows us to evaluate competencies, to know the objective and the criteria, as well as to improve the teaching performance.

  • It is a digital assessment and student support tool.

  • It requires the Internet and the use of a specific program.

  • It is complex and time-consuming to develop.

  • There is a lack of knowledge and information about it.


Figure 6
Graphical representation of participants’ opinions

DISCUSSION AND CONCLUSIONS

Given that competency-based assessment is still a complicated issue for teachers, and that introducing innovation projects into classroom contexts is a pressing training need, it is of interest to combine aspects and variables such as collegial collaboration, comprehensive and shared assessment, and practices to be improved, together with the analysis of indicators to assess these practices with digital rubrics. It is thus appropriate to train teachers in methodologies that favor the exchange and evaluation of good practices with more active strategies, such as gamification, and with a differentiated design of training situations and tasks (with and without gamified badges in the rubrics). Such task designs may be oriented towards assessment against the average score of the group, or at other times against specific standards. It is also worth designing tasks whose ultimate purpose is self-evaluation, allowing learning from the evaluation process itself, as with the variables in this study: a better understanding of its limitations and scope stimulates intrinsic motivation for improving these practices, as well as for their collegial exchange and analysis.

In some ways, this study goes a step further in the use of technologies in general, and more specifically of digital rubrics, to improve the objectivity of assessment and the immediacy of feedback (Maraza-Quispe, 2024; Bouchrika et al., 2019), analyzing how they can help in self-assessment and measure the effect of competitiveness in a more objective way for formative assessment.

Rubrics are valuable resources for this analysis of one's own practices, as they are ideal for sharing evaluation criteria and applying them with a shared understanding of the indicators. This also makes them an appropriate methodology for the ongoing training of teachers and for other teacher competencies, such as practices and decision-making in school management. For example, Tobon et al. (2020) found that, using a rubric, principals' practices achieved a more reliable self-assessment with greater construct and content validity. According to Rodríguez-Gallego et al. (2019), work based on collective and collegiate action projects is the best approach for innovation and improvement among school leaders, and the evaluation methodology they value most for change is the introduction of a more qualitative approach through rubrics.

Repeated use of rubrics allows teachers to practice skills related to evaluative competence. However, in response to the first objective of the study, we have found that it is not so much the rubric as the content of the task that marks the differences between groups.

In line with the second objective, this study verifies that the gamified rubric assessment methodology yields different results across the tasks, with final grades directly related to the badges received (Figure 2); it is therefore a good methodology for debate and discussion among trainee teachers, as demonstrated in other studies (Franco-Mariscal et al., 2021), albeit with different objectives.

Gamification has made it possible to generate greater feedback on the evaluation topic, and the methodology using rubrics obtained a high participant satisfaction score, in response to the third objective of the study, as was also found when the same instrument was applied to rubric use in other studies (Cebrián-de-la-Serna, 2018a; Pérez-Torregrosa et al., 2022b). The participants' general opinion of the rubric is very positive and consistent, as all items of the satisfaction questionnaire score higher than 3 on a 4-point scale.

The application of Bayesian statistics, which is inductive in nature, has made it possible to refine the results obtained with frequentist analyses, which are fundamentally deductive. This methodology, as Sarmiento and Ocampo (2022) point out, enables a deeper understanding of the problem under study. Bayesian statistics, by incorporating uncertainty into the model through a priori distributions, offers a flexibility that captures finer details in the data. In addition, updating beliefs as new information becomes available, an inherent feature of the Bayesian approach, facilitates adaptation to changes and trends in the context of study. In summary, Bayesian statistics provides a robust and flexible framework for understanding and analyzing complex problems.

Once again, the valuable opportunity that technology provides for the assessment of learning has become evident. In this case, the digital rubric has been used as a resource on the Corubric platform, which has proven to be an effective tool for this purpose. As in other studies (Raposo-Rivas & Cebrián-de-la-Serna, 2019; Putz et al., 2020), it has been observed that the feedback has been more interactive and instantaneous, which facilitates more dynamic and participatory learning by teachers. In addition, activities have been perceived as more motivating, which may increase engagement and participation. Finally, the management of assessment data has been simpler and faster, which optimizes the teacher’s time and allows for a more efficient monitoring of participants’ progress. In summary, the integration of gamified digital rubric technology in learning assessment offers numerous advantages that can improve the quality of education.

The study offers a doubly innovative approach. On the one hand, it uses technologies such as the gamified digital rubric, which lets teachers check quickly and effectively, thanks to digital feedback, how it affects collaboration or competition within the group. On the other hand, its methodology addresses the relevant issue of whether, and to what extent, teachers should support each other or see each other as rivals, and how to measure the effects of competitiveness objectively, especially in a profession like teaching, where teachers should act collegially rather than as rivals; at the same time, participants gain enough experience to replicate the methodology with their own students.

As in our study, Hill et al. (2022) found that more than half of their students considered badges useful: badges increased recognition of the development of certain skills, improved understanding of the purpose of the tasks, and increased motivation and satisfaction.

Competitive and non-competitive badges are underpinned by the theory of intrinsic and extrinsic motivation and can foster an active, dynamic learning environment in which learners strive for badges that demonstrate mastery of certain skills or knowledge. However, a balance must be struck with the collaboration and collegiality that education requires, to avoid creating an overly competitive environment that can be counterproductive. Because this tension will always exist, this study contrasts the two types of badges within an educational framework where deep reflection predominates, in a self-critical, positive and productive environment that facilitates reflection on teaching.

Having reached these conclusions, we believe that further studies are still needed to go deeper, as Moral-Santaella et al. (2021) point out:

If we really want the training period to be an effective period, and not merely a period of learning by observation, we must continue to deepen in the construction of (...) paths that guide reflection on teaching competences, which will ensure and guarantee the improvement of the quality of teacher training programs (p.41).

LIMITATIONS AND FUTURE DIRECTIONS

At the heart of initial and ongoing training in active methodologies is the promotion of a change of mentality in teachers with regard to gamification. For this reason, and we leave this for a future occasion, a pre- and post-test of attitudes towards and assessments of gamification in teaching could be carried out, not only before, during or at the end of the course, but also after a longer period of time, to find out what real impact this training has on teachers' classes.

One of the limitations detected by the participants was the difficulty in using the Corubric tool, probably due to its specialization. Therefore, in future work, a more user-friendly tool could be used, one that is more generalized and more familiar to users, such as Google Forms and Office.

In addition, the gamified rubric was only used in two tasks, which allowed students to do both individual work and teamwork. A possible extension is to analyze these grouping modalities, controlling for the variables 'individual work' and 'group work'. Similarly, the gamified rubric could be integrated into a fully gamified classroom dynamic, rather than only into assessment.

Although the satisfaction instrument could be further validated by expert panels, this study confirmed its high reliability (Cronbach's alpha = 0.87), which makes it applicable to other contexts and allows the sample to be extended for generalization. Although small, the sample has a meaningful distribution across different educational and professional levels of in-service teachers and postgraduate trainees, a population that is not very large in the study country for socio-economic reasons, and it represents different geographical areas. In any case, the convenience sample is a limitation to be considered for the generalizability of the results. The high reliability achieved for this instrument is close to that obtained in other studies with larger samples, such as a MOOC-format training program (Lemos et al., 2019) that used the same instrument adapted to that context.

In conclusion, the study presented, based on knowledge situated in a specific context (assessment with a gamified electronic rubric in in-service teacher training), raises various questions and offers results that could be implemented in different settings.

Acknowledgments

This article is the result of applying the methodologies and tools designed and created in the R+D+I project Study of the impact of federated e-rubrics on the evaluation of competencies in the practicum (2015-2017), National R+D+I Excellence Plan (2015-2017), nº EDU2013-41974-P.

REFERENCES

Aguilar, E. T., Brandalise, M. Â. T., & Silva, G. C. (2022). La utilización de Iramuteq en investigaciones educativas: una perspectiva cualicuantitativa para el análisis de datos textuales. Studies in Education Sciences, 3(3), 1059–1069. https://doi.org/10.54019/sesv3n3-004

Arias, M., Pérez-Martín, J., Luque, M., & Díez, F. J. (2019). OpenMarkov, an Open-Source Tool for Probabilistic Graphical Models. International Joint Conference on Artificial Intelligence, 6485-6487. https://doi.org/10.24963/ijcai.2019/931

Benzécri, J. P. (1982). Histoire et préhistoire de l'Analyse des Données. DUNOD.

Barna, B., & Fodor, S. (2017). An empirical study on the use of gamification on IT courses at higher education. In M. Auer, D. Guralnick & I. Simonics (Eds.), Teaching and Learning in a Digital World: Proceedings of the 20th International Conference on Interactive Collaborative Learning–Volume 1 (pp. 684–692). Springer International Publishing. https://doi.org/10.1007/978-3-319-73210-7_80

Basgall, L., Guillén-Gámez, F. D., Colomo-Magaña, E., & Cívico-Ariza, A. (2023). Digital competences of teachers in the use of YouTube as an educational resource: analysis by educational stage and gender. Discover Education, 2(1), 28. https://doi.org/10.1007/s44217-023-00054-x

Bouchrika, I., Harrati, N., Wanick, V., & Wills, G. (2019). Exploring the impact of gamification on student engagement and involvement with e-learning systems. Interactive Learning Environments, 29(8), 1244–1257. https://doi.org/10.1080/10494820.2019.1623267

Cebrián de la Serna, M., & Bergman, M. (2014). Evaluación formativa con e-rúbrica: aproximación al estado del arte. REDU. Revista de docencia universitaria, 12(1), 15-22. http://dx.doi.org/10.4995/redu.2014.6427

Cebrián-de-la-Serna, M. (2018a). Modelo de evaluación colaborativa de los aprendizajes en el prácticum mediante Corubric. Revista Practicum, 3(1), 62-79. https://doi.org/10.24310/RevPracticumrep.v3i1.8275

Cebrián-de-la-Serna, M. (2018b). Metodologías para la evaluación de competencias en el diseño de proyectos de innovación educativa con tecnologías. In T. Linde & R. Pérez, Metodología colaborativas a través de las tecnologías: hacia una evaluación equitativa (pp. 5-14). Universidad de Málaga. https://acortar.link/u43ReR

Cebrián-Robles, D., Cebrián-de-la-Serna, M., Gallego-Arrufat, M. J., & Quintana-Contreras, J. (2017). Impacto de una rúbrica electrónica de argumentación científica en la metodología blended-learning. RIED. Revista Iberoamericana de Educación a Distancia, 21(1), 75-94. http://revistas.uned.es/index.php/ried/article/view/18827

Chacón, J. P., Moreno, J. L. M., & Alonso, Á. S. M. (2015). Los imponderables de la tecnología educativa en la formación del profesorado. Revista latinoamericana de tecnología educativa, 14(3), 11–22. https://doi.org/10.17398/1695-288X.14.3.11

Fernández Medina, C. R., Luque Guerrero, C. R., Ruiz Rey, F. J., Rivera Rogel, D. E., Andrade Vargas, L. D., & Cebrián de la Serna, M. (2021). Evaluación de la competencia oral con rúbricas digitales para el Espacio Iberoamericano del Conocimiento. Pixel-Bit. Revista de Medios y Educación, 62, 71–106. https://doi.org/10.12795/pixelbit.83050

Franco-Mariscal, A. J., Cebrián-Robles, D., & Rodríguez-Losada, N. (2021). Impact of a Training Programme on the e-rubric Evaluation of Gamification Resources with Pre-Service Secondary School Science Teachers. Technology, Knowledge and Learning, 28(2), 769-802. https://doi.org/10.1007/s10758-021-09588-1

García-Vera, A. B., Rayón Rumayor, L., & De la Heras Cuenca, A. M. (2020). Use of Photo-elicitation to evoke and solve Dilemmas that prompt changes Primary School Teachers’ Visions. Journal of New Approaches in Educational Research, 9(1), 137–152. https://doi.org/10.7821/naer.2020.1.499

Greaves, R., & Vlachopoulos, D. (2023). El uso de la gamificación como vehículo de intercambio pedagógico para el desarrollo profesional del profesorado. RIED. Revista Iberoamericana de Educación a Distancia, 26(1), 245–264. https://doi.org/10.5944/ried.26.1.34026

Hennessy, S., D’Angelo, S., McIntyre, N., Koomar, S., Kreimeia, A., Cao, L., Brugha, M., & Zubairi, A. (2022). Technology Use for Teacher Professional Development in Low- and Middle-Income Countries: A systematic review. Computers and Education Open, 3, 100080. https://doi.org/10.1016/j.caeo.2022.100080

Hill, M. A., Overton, T., Kitson, R. R., Thompson, C. D., Brookes, R. H., Coppo, P., & Bayley, L. (2022). ‘They help us realise what we’re actually gaining’: The impact on undergraduates and teaching staff of displaying transferable skills badges. Active Learning in Higher Education, 23(1), 17-34. https://doi.org/10.1177/1469787419898023

Lemos de Carvalho Junior, G., Cebrián-Robles, D., Cebrián-de-la-Serna, M., & Raposo-Rivas, M. (2019). Comparative study SPOC vs. MOOC for socio-technical contents from usability and user satisfaction. Turkish Online Journal of Distance Education, 20(2), 3-20. https://doi.org/10.17718/tojde.557726

López-Puga, J. (2012). Cómo construir y validar redes bayesianas con Nética. Revista Electrónica de Metodología Aplicada, 17(1), 1-17. https://doi.org/10.17811/rema.17.1.2012

Marín, V. I., & Pérez Garcias, A. (2016). Collaborative e-Assessment as a Strategy for Scaffolding Self-Regulated Learning in Higher Education. In S. Caballé & R. Clarisó (Eds.), Formative Assessment, Learning Data Analytics and Gamification (pp. 3–24). Academic Press. https://doi.org/10.1016/B978-0-12-803637-2.00001-4

Manzano-León, A., Camacho-Lazarraga, P., Guerrero, M. A., Guerrero-Puerta, L., Aguilar-Parra, J. M., Trigueros, R., & Alias, A. (2021). Between Level Up and Game Over: A Systematic Literature Review of Gamification in Education. Sustainability: Science Practice and Policy, 13(4), 2247. https://doi.org/10.3390/su13042247

Maraza-Quispe, B. (2024). Impact of the use of gamified online tools: A study with Kahoot and Quizizz in the educational context. International Journal of Information and Education Technology (IJIET), 14(1), 132–140. https://doi.org/10.18178/ijiet.2024.14.1.2033

Marcelo, C., & Marcelo, P. (2021). Educational influencers on Twitter. Analysis of hashtags and relationship structure. Comunicar, 29(68), 73–83. https://doi.org/10.3916/c68-2021-06

Moral-Santaella, C., Ritacco-Real, M., & Morales-Cabezas, J. (2021). Comprobando la eficacia de materiales reflexivos sobre competencias profesionales docentes. Un estudio de investigación-acción. Revista Practicum, 6(1), 26-43. https://doi.org/10.24310/RevPracticumrep.v6i1.10230

Murillo-Zamorano, L. R., Sánchez, J. Á. L., Godoy-Caballero, A. L., & Muñoz, C. B. (2021). Gamification and active learning in higher education: is it possible to match digital society, academia and students’ interests? International Journal of Educational Technology in Higher Education, 18(1), 1–27. https://doi.org/10.1186/s41239-021-00249-y

Nair, S., & Mathew, J. (2021). Evaluation of gamified training: a Solomon four group analysis of the impact of gamification on learning outcomes. Techtrends, 65(5), 750-759. https://doi.org/10.1007/s11528-021-00651-3

Pedrosa, I., Suárez-Álvarez, J., & García-Cueto, E. (2013). Evidencias sobre la validez de contenido: avances teóricos y métodos para su estimación. Acción Psicológica, 10(2), 3-18. https://dx.doi.org/10.5944/ap.10.2.11820

Pérez-Torregrosa, A. B., Cebrián-Robles, V., & Cebrián-de-la-Serna, M. (2022a). ¿Qué hemos aprendido sobre la evaluación de rúbricas digitales en los aprendizajes universitarios? In G. Merma-Molina & D. Gavilán-Martín (Eds.), Investigación e innovación en el contexto educativo desde una perspectiva colectiva (pp. 229-240). Dykinson. https://acortar.link/3nwug2

Pérez-Torregrosa, A. B., Gallego-Arrufat, M. J., & Cebrián-de-la-Serna, M. (2022b). Digital rubric-based assessment of oral presentation competence with technological resources for preservice teachers. Estudios Sobre Educación, 43, 177–198. https://doi.org/10.15581/004.43.009

Pozo-Sánchez, S., Lampropoulos, G., & López-Belmonte, J. (2022). Comparing Gamification models in higher education using face-to-face and virtual escape rooms. Journal of New Approaches in Educational Research, 11(2), 307-322. https://doi.org/10.7821/naer.2022.7.1025

Putz, L. M., Hofbauer, F., & Treiblmaier, H. (2020). Can gamification help to improve education? Findings from a longitudinal study. Computers in Human Behavior, 110, 1–12. https://doi.org/10.1016/j.chb.2020.106392

Ratinaud, P., & Marchand, P. (2012). Application de la méthode ALCESTE à de «gros» corpus et stabilité des «mondes lexicaux»: analyse du «CableGate» avec IRaMuTeQ. In Actes des 11eme Journées internationales d'Analyse statistique des Données Textuelles, JADT 2012 (pp. 835-844). Liège. https://acortar.link/bwukPa

Raposo-Rivas, M., & Cebrián-de-la-Serna, M. (2019). Technology to Improve the Assessment of Learning. Digital Education Review, 35, 1–13. https://doi.org/10.1344/der.2019.35.%25p

Rodríguez-Gallego, M. R., Ordóñez Sierra, R., & López-Martínez, A. (2019). La dirección escolar: Liderazgo pedagógico y mejora escolar. Revista de Investigación Educativa, 38(1), 275–292. https://doi.org/10.6018/rie.364581

Ruiz-Rey, F. J., Cebrián-Robles, V., & Cebrián-de-la-Serna, M. (2021). Redes profesionales en tiempo de Covid19: compartiendo buenas prácticas para el uso de TIC en el prácticum. Revista Practicum, 6(1), 7-25. https://doi.org/10.24310/RevPracticumrep.v6i1.12283

Sarmiento, J. A., & Ocampo, C. I. (2022). Enfoques Frecuentista y Bayesiano en el Estudio del Plagio Académico. Una Propuesta Innovadora en Investigación Educativa. REICE. Revista Iberoamericana Sobre Calidad, Eficacia y Cambio En Educación, 21(1), 139–158. https://doi.org/10.15366/reice2023.21.1.007

Tamayo, G. (2001). Diseños muestrales en la investigación. Semestre Económico, 4(7), 1-14.

Tobon, S., Juárez-Hernández, L. G., Herrera-Meza, S. R., & Núñez, C. (2020). Evaluación de las prácticas directivas en directores escolares: validez y confiabilidad de una rúbrica. Educación XX1, 23(2), 187-210. https://doi.org/10.5944/educxx1.23894

Xu, J., Lio, A., Dhaliwal, H., Andrei, S., Balakrishnan, S., Nagani, U., & Samadder, S. (2021). Psychological interventions of virtual gamification within academic intrinsic motivation: A systematic review. Journal of Affective Disorders, 293, 444-465. https://doi.org/10.1016/j.jad.2021.06.070

Zainuddin, Z., Chu, S. K. W., Shujahat, M., & Perera, C. J. (2020). The impact of gamification on learning and instruction: A systematic review of empirical evidence. Educational Research Review, 30, 100326. https://doi.org/10.1016/j.edurev.2020.100326

Additional information

How to reference this article: Cebrián de la Serna, M., Raposo-Rivas, M., Cebrián-Robles, V., & Sarmiento-Campos, J. A. (2025). Impact of gamified rubrics in teacher training. Educación XX1, 28(1), 313-336. https://doi.org/10.5944/educxx1.39457
