Estudios e Investigaciones

Received: 01 December 2024
Accepted: 18 February 2025
Published: 01 July 2025
DOI: https://doi.org/10.5944/ried.28.2.43288
Abstract: This study presents the validity and reliability of an instrument designed to evaluate the perceptions of teachers and pedagogues in training towards the integration of Artificial Intelligence into tasks related to the teaching profession. It considers intrinsic factors such as attitude towards its responsible use, creativity in producing didactic materials with these tools, the enjoyment associated with their use, and the anxiety experienced when learning this emerging technology during academic training, as well as its relevance to the future labor market. A non-experimental ex post facto survey design was used, with non-probabilistic convenience sampling of 548 teachers and pedagogues in training from faculties of Education Sciences in Spain. Reliability was assessed with Cronbach's alpha, the Spearman-Brown coefficient, Guttman's two-halves coefficient and composite reliability; validity was assessed through comprehension, construct, convergent and discriminant validity. The results showed highly satisfactory reliability and, in terms of validity, a good model fit. The final version of the instrument consists of 25 items classified into five latent factors.
Keywords: artificial intelligence, teachers in training, pedagogues in training, psychometric instrument.
Resumen: Este estudio presenta la validez y fiabilidad en la creación de un instrumento diseñado para evaluar las percepciones de docentes y pedagogos en formación hacia la integración de la Inteligencia Artificial en tareas relacionadas con su profesión docente, teniendo en cuenta factores intrínsecos como la actitud hacia su uso responsable, el nivel de creatividad en la creación de material didáctico con estas herramientas, el disfrute asociado en el uso de estas herramientas, y el nivel de ansiedad al enfrentarse al aprendizaje de esta tecnología emergente en su formación académica y su relevancia en su futuro mercado laboral. Fue utilizado un diseño no experimental ex post facto a través de encuestas con un muestreo no probabilístico por conveniencia, con un total de 548 docentes y pedagogos en formación de facultades de Ciencias de la Educación del territorio español. Para la elaboración del instrumento, se utilizaron medidas de fiabilidad y validez. Respecto a la fiabilidad, fueron utilizados los índices Alfa de Cronbach, Coeficiente Spearman-Brown, Dos Mitades de Guttman y fiabilidad compuesta. Respecto a la validez, se utilizaron la validez de comprensión, constructo, convergente y discriminante. Los resultados demostraron una fiabilidad altamente satisfactoria, y en términos de validez se observó un buen ajuste del modelo. La versión final del instrumento consta de 25 ítems clasificados en cinco factores latentes.
Palabras clave: inteligencia artificial, docentes en formación, pedagogos en formación, instrumento psicométrico.
INTRODUCTION
For several decades, there has been growing interest in the possibilities that digital technologies can offer in the educational field, with Artificial Intelligence (AI) standing out as one of the most promising innovations (Şahín, 2024). Authors such as Lambert and Stevens (2023) stress that AI refers to a group of computational systems designed to learn and make predictions. However, the AI that has really captured attention in recent years is so-called "generative artificial intelligence" (GenAI), which can create new and original content such as texts, images, presentations, audio and videos from prompts (Alenezi et al., 2023; González-Mayorga et al., 2024).
GenAI is proving to be a tool with transformative potential in multiple domains, including education. Indeed, Ng et al. (2021) stress that AI "potentially becomes one of the most important technological skills of the 21st century" (p. 2), mainly "since the launch of ChatGPT in November 2022, which brought the concept of generative AI to public attention and sparked growing interest in its potential impact on education" (Yu and Guo, 2023, p. 1). In this context, thanks to AI, "educators can leverage personalized learning experiences, adaptive content generation, and real-time support for students" (Ruiz-Rojas et al., 2023, p. 1).
However, AI integration requires rigorous planning and appropriate training to ensure effective and responsible use (Gocen and Aydemir, 2020; Hwang and Chen, 2023). If teachers are to prepare students for the challenges of the 21st-century labor market, it is imperative to foster the skills, knowledge and attitudes in AI, and specifically in GenAI, that will enable them to adapt and thrive in a highly dynamic marketplace (Farrelly and Baker, 2023). Educational institutions must therefore adjust and adapt their programs in an agile manner to incorporate the use of AI in all areas of knowledge (Bellas et al., 2023). This raises a first question: are teachers and pedagogues in training adequately trained in AI and GenAI to face the challenges of their professional field? Answering it requires a prior question: are there measurement instruments that assess the responsible use of AI and GenAI by teachers and pedagogues in training, the intrinsic factors involved, and their impact on their future work?
Although advances have been made in recent years in educational measurement related to AI, there is a notable scarcity of psychometric instruments in the scientific literature focused on teachers and pedagogues in training. To support this assertion, a literature search for AI assessment instruments was carried out under the following criteria: 1) that the instruments had been published in the last two to three years, the period in which the use of AI and GenAI has grown most rapidly; 2) that the instruments were focused on teachers and pedagogues in training or practicing teachers, since both belong to the same professional group; and 3) that the instruments were reliable and valid, either through expert validation or construct validation (exploratory or confirmatory factor analysis, EFA and CFA). Table 1 reflects the existence of several instruments, each with a specific structure and taxonomy. From the review of the studies presented, none is focused on teachers and pedagogues in training, except for one (Espinoza-San Juan et al., 2024), which lacks psychometric properties.

As can be seen in the previous table, there is a lack of tools to evaluate the responsible use made by teachers and pedagogues in training, and a lack of tools addressing intrinsic factors such as creativity in creating didactic materials and planning didactic tasks, the enjoyment associated with the use of emerging technological applications, or, conversely, the fear and stress they may feel in their own learning and use of GenAI applications. The purpose of this article is to create a new psychometric instrument with a unique approach, focused on these specific aspects and on a group that has received little research attention so far.
THEORETICAL FRAMEWORK ON THE TAXONOMY OF THE INSTRUMENT'S FACTORS
The use of AI and GenAI by pre-service and in-service teachers is conditioned by a wide variety of factors that play a crucial role (Tiwari et al., 2024). First of all, one of the main factors is the responsible use of this technology (Aler Tubella et al., 2024). While "offering great opportunities, AI systems also generate certain risks that need to be managed in an appropriate and proportionate manner" (European Commission, 2019, p. 4), so "students should be trained on responsible use and ethical guidelines" (Hasanein & Sobaih, 2023, p. 2609). We agree with Brandão et al. (2024) that a number of students probably use GenAI tools because of their accessibility and free availability in extracurricular contexts. We therefore consider that the use of these technologies requires not only technical understanding but also ethical reflection. It requires that teachers and pedagogues in training be prepared for the responsible use of AI and GenAI, taking into account the demands and transformations of their future labor markets.
Secondly, according to Uzumcu and Acilmis (2024), the statement "innovators are creative and entrepreneurial people willing to take risks and open to new ideas" (p. 1112) fits perfectly with the role of teachers in the integration of artificial intelligence in their pedagogical practices. Educators who stand out for their creativity and entrepreneurial mindset are key to harnessing the potential of ICT (Alemany et al., 2021). In our case, the use of AI and GenAI technologies "could help them to be creative in their practice" (Chounta et al., 2022, p. 735). Instead of limiting themselves to using AI and GenAI as a simple task facilitator, these teachers will have the opportunity to design active and personalized methodologies that enhance the learning experience (Kaouni et al., 2023). Consequently, AI could contribute "to the achievement of the fourth SDG (Sustainable Development Goal) proposed by the UN (United Nations) by promoting inclusive, equitable and quality education". This openness to new possibilities and willingness to experiment creatively will allow AI and GenAI to become dynamic tools, capable of enriching the classroom and fostering the autonomy not only of the teacher but also of the students (Mohamed et al., 2024).
A third concept is digital flow, understood as the enjoyment associated with the use of technology. Digital flow theory refers to a state of deep immersion and total concentration in a digital activity, in which the user experiences a combination of enjoyment and intrinsic interest (Guillén-Gámez et al., 2023). This concept was adapted from Mihaly Csikszentmihalyi's flow theory, which originally described "flow" as a state of mind in which a person is completely absorbed in a task (Csikszentmihalyi et al., 2014). In other words, it is "a psychological state in which the person feels simultaneously cognitively efficient, motivated, and happy" (Moneta and Csikszentmihalyi, 1996, p. 277). In the digital context, flow refers to activities performed with digital technologies or platforms, in which people feel high satisfaction while interacting with these tools (Zhan et al., 2024). In the context of this research, teachers and pedagogues in training who enjoy using generative AI will strengthen their digital skills, enabling them to stand out in a labor market that is highly competitive and demanding in digital competences.
Finally, these teaching populations may experience stress or anxiety when learning to use AI and GenAI in educational contexts, mainly due to the impact of these emerging technologies on employment rates and social life (Hopcan et al., 2024). Anxiety related to GenAI can be described as the uneasiness or fear that some people experience about the potential negative effects and risks that may arise with the use of these technologies in different social sectors (Wang and Wang, 2022). In the educational context, there are several reasons why educators may feel anxiety about AI. On the one hand, the idea that it may replace teachers, which would lead to job losses and a decrease in educational quality (Wang et al., 2022). On the other hand, some teachers fear that AI systems may not be able to capture key aspects of teaching, such as building relationships with students and personalizing learning (Ouyang et al., 2022). However, there is a third reason that has been little explored so far in the scientific literature: the possibility that teachers and pedagogues in training may not acquire, during their degree, the skills necessary to use AI and GenAI, which could limit their ability to compete effectively in the labor and educational market.
Taking into consideration this previous theoretical framework, the objective is to validate a psychometric instrument to assess both the responsible use of AI and GenAI by teachers and pedagogues in training, as well as intrinsic factors such as attitude, creativity, enjoyment (digital flow) and anxiety towards the integration of these technologies in tasks related to their teaching profession.
METHOD
Design and type of sampling
To achieve the objectives of this study, a non-experimental (ex post facto) design was employed to evaluate the psychometric properties (reliability and validity) of the instrument. Data were collected through direct contact by the authors with their own students, as well as with other students from their universities and from other institutions offering teaching and/or pedagogy degrees. The sample was selected non-probabilistically during September and October 2024, following a convenience criterion, given the direct access to these groups. Participation was voluntary and anonymous, and informed consent was guaranteed prior to data collection. Data were collected during class time, with prior authorization from the teachers responsible for the subjects in which the students were enrolled. The purpose of the study was explained to the participants, and they were given clear instructions on how to complete the instrument. The questionnaire was administered digitally through an online platform (Google Forms), which allowed responses to be collected efficiently and systematically. The final sample consisted of 548 teachers and pedagogues in training. Table 2 shows the distribution of participants according to various demographic variables.

Procedure in the elaboration of the instrument
The instrument was created after reviewing the literature on the responsible use of AI by teachers and pedagogues in training, as well as on intrinsic factors such as creativity, technological enjoyment (digital flow) and stress towards the integration of AI in tasks related to their teaching profession. Given that few instruments on the subject were found, it was decided to design a new psychometric instrument. A seven-point Likert scale was used, where 1 corresponded to the label “strongly disagree” and 7 to “strongly agree”.
After reviewing the literature and developing an initial set of 41 items, we took into account the recommendations of Hair et al. (2010), who suggest collecting a sample of between five and ten times the number of items in the questionnaire in order to analyze its psychometric properties. With 41 initial items and 548 participants, the ratio was 13.37 participants per item (548/41), well above this recommendation. For the subsequent procedures, the guidelines of Pérez and Carretero-Dios (2005) were followed: comprehension validity, construct validity, convergent and discriminant validity, and reliability. The statistical analyses were carried out with IBM SPSS V24 and AMOS V24.
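For transparency, the sample-adequacy check can be restated from the figures reported above. The following minimal Python sketch (the study itself relied on SPSS/AMOS) only reproduces the arithmetic of the five-to-ten participants-per-item heuristic; the variable names are illustrative.

```python
# Sample-adequacy check following the 5-10 participants-per-item heuristic
# attributed to Hair et al. (2010). Figures taken from the study.
n_items_initial = 41
n_participants = 548

recommended_min = 5 * n_items_initial     # 205 participants
recommended_max = 10 * n_items_initial    # 410 participants
ratio = n_participants / n_items_initial  # ~13.37 participants per item

print(f"Recommended range: {recommended_min}-{recommended_max} participants")
print(f"Collected: {n_participants} (ratio = {ratio:.2f} participants per item)")
```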
RESULTS
Comprehension validity: statistical analysis of items
Considering that the Kolmogorov-Smirnov test of normality is very sensitive to small deviations in Likert-type scales, it was not used as the only criterion of normality, since the data did not pass this test (p < .05). Instead, the criterion of Pérez and Medrano (2010) was followed to assess comprehension validity: items with skewness and kurtosis between ±1.5 are considered adequate, together with those whose standard deviation is greater than 1 (Meroño et al., 2018).
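As an illustration of this screening step, the sketch below (Python, not the SPSS routine used in the study) computes the standard deviation, skewness and kurtosis of each item and flags those outside the criteria just described; the `responses` DataFrame and its column names are hypothetical.

```python
import pandas as pd
from scipy.stats import skew, kurtosis

def screen_items(items: pd.DataFrame,
                 max_abs_shape: float = 1.5,
                 min_sd: float = 1.0) -> pd.DataFrame:
    """Flag items whose skewness/kurtosis fall outside +/-1.5
    or whose standard deviation is not greater than 1."""
    report = pd.DataFrame({
        "sd": items.std(ddof=1),
        "skewness": items.apply(lambda col: skew(col, bias=False)),
        "kurtosis": items.apply(lambda col: kurtosis(col, bias=False)),  # excess kurtosis
    })
    report["keep"] = (
        report["skewness"].abs().le(max_abs_shape)
        & report["kurtosis"].abs().le(max_abs_shape)
        & report["sd"].gt(min_sd)
    )
    return report

# Usage with a hypothetical response matrix (rows = respondents, columns = items):
# report = screen_items(responses)
# dropped = report.index[~report["keep"]].tolist()
```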
Table 3 shows the instrument items organized by their respective factors. The descriptive results indicated that all items show adequate comprehension validity, since their values were within the established limits, except for item DIM2.1, which exceeded the thresholds recommended by Pérez and Medrano (2010) and was therefore eliminated.

After verifying the dispersion, skewness and kurtosis values, we checked whether the overall Cronbach's alpha increased or decreased when each item was eliminated. The homogeneity index (corrected item-total correlation) was also checked in order to discard items with coefficients lower than .40 (Shaffer et al., 2010). As a result, items DIM4.1, DIM4.3, DIM4.5 and DIM4.6 were eliminated from subsequent procedures. Table 4 shows the statistics for each item after eliminating those with low reliability loadings.
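A sketch of these item-level reliability checks is given below, assuming a hypothetical `responses` DataFrame with one column per item. It reproduces the corrected item-total correlation and the alpha-if-item-deleted logic described above, not the exact SPSS output.

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def item_diagnostics(items: pd.DataFrame, min_r: float = 0.40) -> pd.DataFrame:
    """Corrected item-total correlation and alpha-if-item-deleted for each item."""
    rows = {}
    for col in items.columns:
        rest = items.drop(columns=col)
        rows[col] = {
            # correlation of the item with the sum of the remaining items
            "corrected_item_total_r": items[col].corr(rest.sum(axis=1)),
            "alpha_if_deleted": cronbach_alpha(rest),
        }
    out = pd.DataFrame(rows).T
    out["keep"] = out["corrected_item_total_r"] >= min_r
    return out

# Usage (hypothetical 'responses' DataFrame restricted to one factor's items):
# diag = item_diagnostics(responses[factor_4_items])
# print(diag[~diag["keep"]])   # candidates for elimination
```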

Finally, within this type of validity, Asencio et al. (2017) recommend verifying the common variance shared by the test items by analysing the correlation between the instrument's different dimensions. Table 5 shows the correlation matrix between the instrument's latent factors after applying oblimin rotation, indicating that the factors are correlated. This result supports the existence of a common underlying structure organised into five latent factors.

Construct validity: exploratory factor analysis
After checking the comprehension validity, the factorial structure of the instrument was assessed by means of exploratory factor analysis (EFA), conducted with oblimin rotation and the principal axis factoring extraction method in order to account for most of the common variance. This approach is adequate even when the assumption of normality is not fully met (Fabrigar et al., 1999).
First, the adequacy of the items for factor analysis was tested with Bartlett's test of sphericity (Chi-square = 22279.671; df = 666; p < .05) and the Kaiser-Meyer-Olkin measure of sampling adequacy (KMO = .960). Both coefficients were satisfactory. Following the criterion of retaining only factors with eigenvalues greater than 1 (Cattell, 1966), five latent factors met this condition, as shown in Table 6. The resulting scale thus consists of five factors, explaining 76.89% of the total variance: DIM2 (47.13%), DIM5 (11.07%), DIM3 (7.32%), DIM4 (6.05%), and DIM1 (5.32%).
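The adequacy tests named above can be illustrated with the sketch below, which implements Bartlett's test of sphericity, the KMO measure and the eigenvalue-greater-than-1 count directly from the correlation matrix. The `responses` DataFrame is hypothetical, and the principal axis factoring with oblimin rotation itself is assumed to be delegated to a dedicated package rather than re-implemented here.

```python
import numpy as np
import pandas as pd
from scipy.stats import chi2

def bartlett_sphericity(data: pd.DataFrame):
    """Bartlett's test of sphericity: H0 = the correlation matrix is an identity matrix."""
    n, p = data.shape
    R = np.corrcoef(data.values, rowvar=False)
    statistic = -(n - 1 - (2 * p + 5) / 6) * np.log(np.linalg.det(R))
    df = p * (p - 1) / 2
    return statistic, df, chi2.sf(statistic, df)

def kmo(data: pd.DataFrame) -> float:
    """Kaiser-Meyer-Olkin measure of sampling adequacy (overall)."""
    R = np.corrcoef(data.values, rowvar=False)
    inv_R = np.linalg.inv(R)
    d = np.sqrt(np.outer(np.diag(inv_R), np.diag(inv_R)))
    partial = -inv_R / d                      # anti-image (partial) correlations
    mask = ~np.eye(R.shape[0], dtype=bool)    # off-diagonal entries only
    r2, q2 = (R[mask] ** 2).sum(), (partial[mask] ** 2).sum()
    return r2 / (r2 + q2)

def kaiser_retained_factors(data: pd.DataFrame) -> int:
    """Number of eigenvalues of the correlation matrix greater than 1."""
    eigvals = np.linalg.eigvalsh(np.corrcoef(data.values, rowvar=False))
    return int((eigvals > 1).sum())

# Usage with a hypothetical 'responses' DataFrame (respondents x items):
# print(bartlett_sphericity(responses), kmo(responses), kaiser_retained_factors(responses))
# The extraction/rotation step itself (principal axis factoring with oblimin) would
# typically be run with a dedicated factor-analysis package rather than by hand.
```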

Table 7 shows that the factor explaining the highest percentage of common variance was dimension 2 (Technological enjoyment in the responsible and safe use of AI), including items DIM2.4, DIM2.3, DIM2.10, DIM2.8, DIM2.5, DIM2.2, DIM2.7, DIM2.6 and DIM2.9. The second factor with the highest loadings was dimension 5 (Responsible use of AI for employability as a future teacher or educator), including items DIM5.2, DIM5.8, DIM5.4, DIM5.7, DIM5.1, DIM5.6, DIM5.5, DIM5.9 and DIM5.3. The third was dimension 3 (Creativity to use AI responsibly as a future teacher or pedagogue), including items DIM3.4, DIM3.5, DIM3.7, DIM3.6, DIM3.3, DIM3.2 and DIM3.1. The fourth, in order of saturation, was dimension 4 (Anxiety to use AI in my future work context), with items DIM4.7, DIM4.2 and DIM4.4. The factor with the lowest percentage was dimension 1 (Attitude towards using AI for employability as a future teacher or educator), with items DIM1.6, DIM1.5, DIM1.4, DIM1.7, DIM1.3, DIM1.8, DIM1.2 and DIM1.1.

Construct validity: confirmatory factor analysis
After the EFA, a CFA was performed to verify the fit of the data by means of a structural equation model, with the purpose of evaluating the fit of the theoretical model identified in the EFA, following the recommendations of Thompson (2004) and of other authors in the creation of their psychometric instruments (Guillén-Gámez et al., 2024; Soriano-Alcantara et al., 2024). To interpret the CFA indices, the recommendations of Bentler (1989) and Hu and Bentler (1999) were followed: minimum discrepancy over degrees of freedom (CMIN/DF), where values below 5 indicate a reasonable fit; the root mean square error of approximation (RMSEA), with values below .07 considered optimal; and the goodness-of-fit (GFI), comparative fit (CFI) and normed fit (NFI) indices, considered adequate when equal to or above .90.
Successive psychometric models were estimated, eliminating the following items until the model with the best psychometric properties was found: DIM1.1, DIM1.2, DIM1.8, DIM2.2, DIM2.3, DIM2.9, DIM3.1, DIM3.2, DIM5.1, DIM5.3 and DIM5.4. The final model showed CMIN/DF = 2.487 (below 5), RMSEA = .052 (below .07), and GFI = .910, CFI = .973 and NFI = .955 (all above .90). Figure 1 presents the final factor model obtained from the CFA, together with the standardised correlation values derived from it.
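As a hedged illustration of this step (the study itself used AMOS), the sketch below specifies the final five-factor measurement model in lavaan-style syntax and fits it with the semopy package, which is an assumption of this example rather than the authors' tool. Item labels use underscores instead of dots for the syntax parser, and the retained indicators follow the lists reported above.

```python
import pandas as pd
from semopy import Model, calc_stats  # assumed SEM package; the study used AMOS

# Measurement model in lavaan-style syntax; item names are illustrative
# relabelings (DIM1.3 -> DIM1_3) of the retained indicators of each factor.
MODEL_DESC = """
Attitude    =~ DIM1_3 + DIM1_4 + DIM1_5 + DIM1_6 + DIM1_7
Enjoyment   =~ DIM2_4 + DIM2_5 + DIM2_6 + DIM2_7 + DIM2_8 + DIM2_10
Creativity  =~ DIM3_3 + DIM3_4 + DIM3_5 + DIM3_6 + DIM3_7
Anxiety     =~ DIM4_2 + DIM4_4 + DIM4_7
Responsible =~ DIM5_2 + DIM5_5 + DIM5_6 + DIM5_7 + DIM5_8 + DIM5_9
"""

def fit_cfa(responses: pd.DataFrame) -> pd.DataFrame:
    """Fit the five-factor CFA and return the table of fit statistics."""
    model = Model(MODEL_DESC)
    model.fit(responses)
    return calc_stats(model)  # fit indices (chi2, DoF, CFI, GFI, NFI, RMSEA, ...)

# Usage with a hypothetical 'responses' DataFrame containing the 25 retained items:
# stats = fit_cfa(responses)
# print(stats.T)  # inspect chi2/DoF (CMIN/DF), RMSEA, GFI, CFI, NFI against the cutoffs
# A reasonable fit is usually taken as chi2/DoF < 5, RMSEA < .07 and GFI/CFI/NFI >= .90.
```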

Convergent and discriminant validity
Once construct validity had been examined through the EFA and CFA, two further types of validity were verified. On the one hand, convergent validity, which refers to the confidence that the items assessed measure the same latent factor (Cheung and Wang, 2017), was evaluated using the average variance extracted (AVE), whose values must be greater than .50, as recommended by Hair et al. (2010). In addition, the square root of the AVE (on the diagonal) should be greater than the correlations between factors (Hair et al., 2010). Table 8 shows that the AVE values exceed .50 and that the square roots of the AVEs (on the diagonal, in bold) are greater than the correlations between the latent factors. On the other hand, discriminant validity assesses the extent to which a construct is truly different from the other constructs in a research model; for this purpose, the MSV (maximum shared squared variance) index is used, which must be lower than the AVE of each latent factor (Fornell and Larcker, 1981). According to the results in Table 8, discriminant validity between the latent factors of the instrument is preserved.
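These checks reduce to a few formulas: AVE as the mean squared standardized loading of a factor, the Fornell-Larcker comparison of the square root of the AVE with the inter-factor correlations, and MSV as the largest squared correlation with any other factor. A minimal sketch, assuming hypothetical loading and correlation inputs taken from a CFA, is shown below.

```python
import numpy as np
import pandas as pd

def convergent_discriminant(loadings: dict, phi: pd.DataFrame) -> pd.DataFrame:
    """AVE, sqrt(AVE) and MSV per latent factor.

    loadings: standardized loadings of each factor's items (from the CFA)
    phi: inter-factor correlation matrix (factors x factors)
    """
    rows = {}
    for factor, lam in loadings.items():
        lam = np.asarray(lam)
        ave = np.mean(lam ** 2)                # average variance extracted
        others = phi.loc[factor].drop(factor)  # correlations with the other factors
        rows[factor] = {
            "AVE": ave,                        # should exceed .50
            "sqrt_AVE": np.sqrt(ave),          # should exceed every value in 'others'
            "MSV": (others ** 2).max(),        # maximum shared squared variance; should be < AVE
        }
    return pd.DataFrame(rows).T

# Usage with hypothetical CFA output (loadings and factor correlations are made up):
# loadings = {"Attitude": [.82, .79, .85, .77, .81], "Anxiety": [.74, .88, .80]}
# phi = pd.DataFrame([[1.0, .35], [.35, 1.0]],
#                    index=["Attitude", "Anxiety"], columns=["Attitude", "Anxiety"])
# print(convergent_discriminant(loadings, phi))
```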

Reliability analysis
Finally, the reliability of each latent factor of the instrument was calculated, as well as the overall internal consistency. For this purpose, Cronbach's alpha, the Spearman-Brown coefficient, Guttman's two-halves coefficient and composite reliability (CR) were used, taking into consideration that the recommended values should be higher than .70 (Nunnally, 1978; Heinzl et al., 2011). The results obtained for the four indices were very satisfactory (Table 9), indicating that the internal consistency of the instrument is adequate.
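These four coefficients can be written compactly. The sketch below assumes a hypothetical `responses` DataFrame for one factor and its standardized CFA loadings, and uses an odd/even item split for the split-half coefficients (the split convention used in SPSS may differ).

```python
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Alpha = k/(k-1) * (1 - sum of item variances / variance of the total score)."""
    k = items.shape[1]
    return (k / (k - 1)) * (1 - items.var(ddof=1).sum() / items.sum(axis=1).var(ddof=1))

def split_half(items: pd.DataFrame):
    """Spearman-Brown and Guttman split-half coefficients (odd/even item split)."""
    half1, half2 = items.iloc[:, ::2].sum(axis=1), items.iloc[:, 1::2].sum(axis=1)
    r = half1.corr(half2)
    spearman_brown = 2 * r / (1 + r)
    total_var = (half1 + half2).var(ddof=1)
    guttman = 2 * (1 - (half1.var(ddof=1) + half2.var(ddof=1)) / total_var)
    return spearman_brown, guttman

def composite_reliability(loadings) -> float:
    """CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)."""
    lam = np.asarray(loadings)
    return lam.sum() ** 2 / (lam.sum() ** 2 + (1 - lam ** 2).sum())

# Usage with hypothetical inputs for one factor:
# print(cronbach_alpha(responses[anxiety_items]))
# print(split_half(responses[anxiety_items]))
# print(composite_reliability([.74, .88, .80]))  # made-up standardized loadings
```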

CONCLUSIONS AND DISCUSSION
In the current educational context, marked by the rapid integration of AI and GenAI (Şahín, 2024), this study set out to develop and validate an instrument to measure the self-perceptions of trainee teachers and educationalists on the use of AI and its relevance in the labour market. Given the growing importance of this emerging technology in the training and professional environment, having a tool that assesses multiple dimensions of AI and GenAI use is essential (Ng et al., 2021), as educators will be able to take advantage of these tools to personalise learning for students, generate multimedia content such as images, videos or text, as well as analyse learning in real time (Ruiz-Rojas et al., 2023).
Most of the measurement instruments created so far have used general student samples (Hornberger et al., 2023; Marquina et al., 2024; Nazaretsky et al., 2022; Saz-Pérez et al., 2024; Ng et al., 2023; Grájeda et al., 2024; Chai et al., 2024; Üzüm et al., 2024; Morales-García et al., 2024; Yilmaz et al., 2023), but very few have focused on teachers and pedagogues in training (Espinoza-San Juan et al., 2024), which gives added value to this study. This instrument allows measuring not only the ethical and responsible commitment to technology in terms of employability, but also the attitude of teachers and trainee teachers towards how the use of AI and GenAI could influence their job opportunities. In addition, it includes the assessment of factors such as technological enjoyment (digital flow), creativity in the use of AI and the degree of anxiety associated with its learning and application, which are key aspects for their professional development.
To create the psychometric instrument, the steps recommended by Pérez and Carretero-Dios (2005), Guillén-Gámez et al. (2024) and Soriano-Alcantara et al. (2024) were followed. An initial version of the instrument was developed with a total of 41 items divided into five latent factors, written for a seven-point Likert scale. In the psychometric validation process, the sample proved adequate, far exceeding the recommendation of Hair et al. (2010), with more than ten participants per item (a ratio of 13.37 for the initial version and 21.92 for the final 25-item version).
To ensure comprehension validity, items that did not meet the established ranges were eliminated, based on the skewness and kurtosis coefficients and the scale's discrimination index, as recommended by Meroño et al. (2018) and Pérez and Medrano (2010). In relation to construct validity, no items were discarded during the EFA, as they all reached the minimum saturation loading of .40, following Cattell's (1966) recommendations. The process resulted in the five latent factors described above, which explained 76.89% of the variance in the participants' scores.
Several adjustments were made to the CFA, eliminating the items with the lowest loadings on their corresponding latent factors, until a good fit was identified according to the criteria of Bentler (1989) and Hu and Bentler (1999). In this final version, the coefficients found for the CFI, NFI, RMSEA and CMIN/DF indices were highly satisfactory. In addition, the convergent and discriminant validity of this final version of the instrument was tested, following the guidelines of Cheung and Wang (2017), Guillén-Gámez et al. (2024) and Soriano-Alcantara et al. (2024). Specifically, we found satisfactory values for both the AVE and MSV indices, in line with Hair et al. (2010) and Fornell and Larcker (1981).
As for the reliability of the instrument, excellent psychometric properties were obtained as measured by Cronbach's alpha, both in the five latent factors that make up the instrument and in its overall assessment (α = .924). The other reliability coefficients used to test the internal consistency of the instrument, namely Spearman-Brown, Guttman two-halves and composite reliability (CR), also supported its reliability. The coefficients fell within the ranges recommended by Heinzl et al. (2011) and Nunnally (1978).
After the different statistical analyses carried out, the final version of the instrument consisted of 25 items, classified into five factors. The first factor, entitled ‘Attitude towards the use of AI for employability as a future teacher or pedagogue’ was finally composed of five items; the second factor entitled ‘Digital flow in the responsible and safe use of AI’ was composed of a total of six items; the third factor was entitled ‘Creativity to use AI responsibly as a future teacher or pedagogue’ with a total of five items; the fourth factor was ‘Anxiety to use AI in my future work context’ with a total of three items; and finally, the fifth dimension was ‘Responsible use of AI for employability as a future teacher or pedagogue’ with a total of six items.
Although the results obtained show satisfactory reliability and validity, certain limitations must be acknowledged. The sample, composed exclusively of trainee teachers and pedagogues in Spain and selected by non-probability sampling, restricts the generalisability of the results to other cultural and educational contexts. The absence of random selection and the focus on a single country may influence the applicability of the findings to different educational realities. To mitigate this limitation, future studies could extend the sample through random, stratified sampling to include trainee teachers and educationalists from different educational levels and geographical regions, both within Spain and in other countries. Replication of the study in international contexts would also allow the validity of the instrument to be assessed in different educational and socio-cultural settings, increasing its applicability and robustness. It would also be valuable to carry out longitudinal studies to analyse how the perception and use of GenAI evolve over time and in line with progress in teacher education.
It would also be useful to adapt and validate the instrument in multicultural settings and to conduct mixed studies involving both trainee teachers and pedagogues and the management teams of educational and work institutions, in order to understand how the use of AI and GenAI affects teaching practice in the classroom and in work management. As future work, it is also suggested to complement the quantitative approach with qualitative analyses that allow for a deeper understanding of participants' perceptions and experiences. The integration of interviews or focus groups could provide more detailed insights into the use of AI. In addition, the rapid evolution of this technology and the variability in the self-perception of AI and GenAI by the group analysed may require future adaptations of the instrument in order to maintain its relevance and the pertinence of its items.
Regarding the theoretical and practical implications of this scientific study, the creation of this instrument contributes to the field of education by providing a framework for assessing the responsible use of AI and GenAI and their relevance in the labour market. The creation of this instrument offers a valuable guide for the design of training programmes that integrate AI and GenAI in an ethical and effective way, analysing self-perception in beliefs, creativity, technological enjoyment (digital flow) or anxiety levels when having to learn the safe use of these emerging tools to include them in their future professional practice.
In addition, this instrument allows institutions to identify specific areas of training in AI and GenAI, to strengthen the employability of graduates not only in the educational context of trainee teachers at the Primary and Early Childhood Education stages, where many will work in schools, but also in other work and socio-community environments in which educationalists will work. In this way, AI and GenAI training will not only enhance the career prospects of this group but also expand their opportunities in sectors that value advanced technological skills, thus responding to the demands of a constantly evolving labour market.
Funding
This study was sponsored by the ERIA - UAM-Founderz-Microsoft Chair (Employability and Responsible Use of AI https://catedraeria.com) and funded by the project Resources for the Responsible Use of Artificial Intelligence (RAI) in Professional and Teaching Skills Development (Ref. FPYE_026.24_INN).
REFERENCES
Alemany Díaz, M. D. M., Vallés Lluch, A., Villanueva López, J. F., & García-Serra García, J. (2021). E-learning in "innovation, creativity and entrepreneurship": Exploring the new opportunities and challenges of technologies. Journal of Small Business Strategy (Online), 31(1), 39-50. https://doi.org/10.21125/inted.2020.0686
Alenezi, M. A. K., Mohamed, A. M., & Shaaban, T. S. (2023). Revolutionizing EFL special education: how ChatGPT is transforming the way teachers approach language learning. Innoeduca. International Journal of Technology and Educational Innovation, 9(2), 5-23. https://doi.org/10.24310/innoeduca.2023.v9i2.16774
Aler Tubella, A., Mora-Cantallops, M., & Nieves, J. C. (2024). How to teach responsible AI in Higher Education: challenges and opportunities. Ethics and Information Technology, 26(1), 1-14. https://doi.org/10.1007/s10676-023-09733-7
Asencio, E. N., García, E. J., Redondo, S. R., & Ruano, B. T. (2017). Fundamentos de la investigación y la innovación educativa. UNIR editorial.
Bellas, F., Guerreiro-Santalla, S., Naya, M., & Duro, R. J. (2023). AI curriculum for European high schools: An embedded intelligence approach. International Journal of Artificial Intelligence in Education, 33(2), 399-426. https://doi.org/10.1007/s40593-022-00315-0
Bentler, P. M. (1989). EQS structural equations program manual. BMDP Statistical Software.
Brandão, A., Pedro, L., & Zagalo, N. (2024). Teacher professional development for a future with generative artificial intelligence–an integrative literature review. Digital Education Review, (45), 151-157. https://doi.org/10.1344/der.2024.45.151-157
Cattell, R. B. (1966). The scree test for the number of factors. Multivariate Behavioral Research, 1(2), 245-276. https://doi.org/10.1207/s15327906mbr0102_10
Chai, C. S., Yu, D., King, R. B., & Zhou, Y. (2024). Development and validation of the Artificial Intelligence Learning Intention Scale (AILIS) for University Students. SAGE Open, 14(2), 21582440241242188. https://doi.org/10.1177/21582440241242188
Cheng, L., Umapathy, K., Rehman, M., Ritzhaupt, A., Antonyan, K., Shidfar, P., Nichols, J., Lee, M., & Abramowitz, B. (2023). Designing, developing, and validating a measure of undergraduate students’ conceptions of artificial intelligence in education. Journal of Interactive Learning Research, 34(2), 275-311. https://doi.org/10.1037/t93665-000
Cheung, G. W., & Wang, C. (2017). Current approaches for assessing convergent and discriminant validity with SEM: issues and solutions. Academy of Management Proceedings, 2017(1), 12706. https://doi.org/10.5465/AMBPP.2017.12706abstract
Chounta, I. A., Bardone, E., Raudsep, A., & Pedaste, M. (2022). Exploring teachers’ perceptions of artificial intelligence as a tool to support their practice in Estonian K-12 education. International Journal of Artificial Intelligence in Education, 32(3), 725-755. https://doi.org/10.1007/s40593-021-00243-5
Csikszentmihalyi, M., Csikszentmihalyi, M., Abuhamdeh, S., & Nakamura, J. (2014). Flow. Flow and the foundations of positive psychology: The Collected Works of Mihaly Csikszentmihalyi, 227-238. https://doi.org/10.1007/978-94-017-9088-8_15
Espinoza-San Juan, J., Raby, M. D., & Sagredo-Lillo, E. (2024). Validación de un cuestionario sobre las percepciones y usos de la IA-Gen entre estudiantes de pedagogía. Revista Ibérica de Sistemas e Tecnologias de Informação, (E70), 574-585.
European Commission. (2019). Directorate-General for communications networks content and technology: Ethics guidelines for trustworthy AI. Publications Office. https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai
Fabrigar, L. R., Wegener, D. T., MacCallum, R. C., & Strahan, E. J. (1999). Evaluating the use of exploratory factor analysis in psychological research. Psychological Methods, 4(3), 272-299. https://doi.org/10.1037/1082-989X.4.3.272
Farrelly, T., & Baker, N. (2023). Generative artificial intelligence: Implications and considerations for higher education practice. Education Sciences, 13(11), 1109. https://doi.org/10.3390/educsci13111109
Fornell, C., & Larcker, D. F. (1981). Evaluating structural equation models with unobservable variables and measurement error. Journal of Marketing Research, 18(1), 39-50. https://doi.org/10.1177/002224378101800104
Gocen, A., & Aydemir, F. (2020). Artificial intelligence in education and schools. Research on Education and Media, 12(1), 13-21. https://doi.org/10.2478/rem-2020-0003
González-Mayorga, H., Rodríguez-Esteban, A., & Vidal, J. (2024). El uso del modelo GPT de OpenAI para el análisis de textos abiertos en investigación educativa [Using OpenAI’s GPT Model to Analyse Open Texts in Educational Research]. Pixel-Bit. Revista de Medios y Educación, 69, 227-253. https://doi.org/10.12795/pixelbit.102032
Grájeda, A., Burgos, J., Córdova, P., & Sanjinés, A. (2024). Assessing student-perceived impact of using artificial intelligence tools: Construction of a synthetic index of application in higher education. Cogent Education, 11(1), 2287917. https://doi.org/10.1080/2331186X.2023.2287917
Guillén-Gámez, F. D., Ruiz-Palmero, J., & García, M. G. (2023). Digital competence of teachers in the use of ICT for research work: development of an instrument from a PLS-SEM approach. Education and Information Technologies, 28(12), 16509-16529. https://doi.org/10.1007/s10639-023-11895-2
Guillén-Gámez, F. D., Tomczyk, Ł., Colomo-Magaña, E., & Mascia, M. L. (2024). Digital competence of Higher Education teachers in research work: validation of an explanatory and confirmatory model. Journal of E-Learning and Knowledge Society, 20(3), 1-12. https://doi.org/10.20368/1971-8829/1135963
Hair Jr, J. F., Black, W. C., Babin, B. J., & Anderson, R. E. (2010). Multivariate data analysis. Prentice Hall.
Hasanein, A. M., & Sobaih, A. E. E. (2023). Drivers and consequences of ChatGPT use in higher education: Key stakeholder perspectives. European Journal of Investigation in Health, Psychology and Education, 13(11), 2599-2614. https://doi.org/10.3390/ejihpe13110181
Heinzl, A., Buxmann, P., Wendt, O., & Weitzel, T. (Eds.). (2011). Theory-guided modeling and empiricism in information systems research. Springer Science & Business Media. https://doi.org/10.1007/978-3-7908-2781-1
Hopcan, S., Türkmen, G., & Polat, E. (2024). Exploring the artificial intelligence anxiety and machine learning attitudes of teacher candidates. Education and Information Technologies, 29(6), 7281-7301. https://doi.org/10.1007/s10639-023-12086-9
Hornberger, M., Bewersdorff, A., & Nerdel, C. (2023). What do university students know about Artificial Intelligence? Development and validation of an AI literacy test. Computers and Education: Artificial Intelligence, 5, 1-12. https://doi.org/10.1016/j.caeai.2023.100165
Hu, L., & Bentler, P. M. (1999). Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria versus new alternatives. Structural Equation Modeling: A Multidisciplinary Journal, 6(1), 1-55. https://doi.org/10.1080/10705519909540118
Hwang, G. J., & Chen, N. S. (2023). Exploring the potential of generative artificial intelligence in education: applications, challenges, and future research directions. Journal of Educational Technology & Society, 26(2), 1-19. https://doi.org/10.30191/ETS.202304_26(2).0014
Jang, Y., Choi, S., & Kim, H. (2022). Development and validation of an instrument to measure undergraduate students’ attitudes toward the ethics of artificial intelligence (AT-EAI) and analysis of its difference by gender and experience of AI education. Education and Information Technologies, 27(8), 11635-11667. https://doi.org/10.1007/s10639-022-11086-5
Kaouni, M., Lakrami, F., & Labouidya, O. (2023). The design of an adaptive E-learning model based on Artificial Intelligence for enhancing online teaching. International Journal of Emerging Technologies in Learning (Online), 18(6), 202-219. https://doi.org/10.3991/ijet.v18i06.35839
Kim, S. W., & Lee, Y. (2022). The artificial intelligence literacy scale for middle school students. Journal of the Korea Society of Computer and Information, 27(3), 225-238. https://doi.org/10.9708/jksci.2022.27.03.225
Lambert, J., & Stevens, M. (2023). ChatGPT and generative AI technology: a mixed bag of concerns and new opportunities. Computers in the Schools, 1-25. https://doi.org/10.1080/07380569.2023.2256710
Marquina, M. C. G., Pinto-Villar, Y. M., Aranzamendi, J. A. M., & Gutiérrez, B. J. A. (2024). Adaptación y validación de un instrumento para medir las actitudes de los universitarios hacia la inteligencia artificial. Revista de Comunicación, 23(2), 125-142. https://doi.org/10.26441/RC23.2-2024-3493
Meroño, L., Calderón Luquin, A., Arias Estero, J. L., & Méndez Giménez, A. (2018). Diseño y validación del cuestionario de percepción del profesorado de Educación Primaria sobre el aprendizaje del alumnado basado en competencias (#ICOMpri2). Revista Complutense de Educación, 29(1), 215-235. https://doi.org/10.5209/RCED.52200
Mohamed, A. M., Shaaban, T. S., Bakry, S. H., Guillén-Gámez, F. D., & Strzelecki, A. (2024). Empowering the Faculty of Education Students: Applying AI’s Potential for Motivating and Enhancing Learning. Innovative Higher Education, 1-23. https://doi.org/10.1007/s10755-024-09747-z
Moneta, G. B., & Csikszentmihalyi, M. (1996). The effect of perceived challenges and skills on the quality of subjective experience. Journal of Personality, 64(2), 275-310. https://doi.org/10.1111/j.1467-6494.1996.tb00512.x
Morales-García, W. C., Sairitupa-Sanchez, L. Z., Morales-García, S. B., & Morales-García, M. (2024). Development and validation of a scale for dependence on artificial intelligence in university students. Frontiers in Education, 9, Article 1323898. https://doi.org/10.3389/feduc.2024.1323898
Nazaretsky, T., Cukurova, M., & Alexandron, G. (2022, March). An instrument for measuring teachers’ trust in AI-based educational technology. In LAK22: 12th International Learning Analytics and Knowledge Conference (pp. 56-66). https://doi.org/10.1145/3506860.3506866
Ng, D. T. K., Leung, J. K. L., Chu, S. K. W., & Qiao, M. S. (2021). Conceptualizing AI literacy: An exploratory review. Computers and Education: Artificial Intelligence, 2, 100041. https://doi.org/10.1016/j.caeai.2021.100041
Ng, D. T. K., Wu, W., Leung, J. K. L., & Chu, S. K. W. (2023). Artificial Intelligence (AI) literacy questionnaire with confirmatory factor analysis. In 2023 IEEE International Conference on Advanced Learning Technologies (ICALT) (pp. 233-235). IEEE. https://doi.org/10.1109/ICALT58122.2023.00074
Nunnally, J. C. (1978). An overview of psychological measurement. Clinical diagnosis of mental disorders: A handbook (pp. 97-146). https://doi.org/10.1007/978-1-4684-2490-4_4
Ouyang, F., Zheng, L., & Jiao, P. (2022). Artificial intelligence in online higher education: A systematic review of empirical research from 2011 to 2020. Education and Information Technologies, 27(6), 7893-7925. https://doi.org/10.1007/s10639-022-10925-9
Pérez, C., & Carretero-Dios, H. (2005). Normas para el desarrollo y revisión de estudios instrumentales. International Journal of Clinical and Health Psychology, 5(3), 521-551.
Pérez, E. R., & Medrano, L. A. (2010). Análisis factorial exploratorio: bases conceptuales y metodológicas. Revista Argentina de Ciencias del Comportamiento (RACC), 2(1), 58-66.
Ruiz-Rojas, L. I., Acosta-Vargas, P., De-Moreta-Llovet, J., & Gonzalez-Rodriguez, M. (2023). Empowering education with generative artificial intelligence tools: Approach with an instructional design matrix. Sustainability, 15(15), 11524. https://doi.org/10.3390/su151511524
Şahín Kölemen, C. (2024). Artificial intelligence technologies and ethics in educational processes: solution suggestions and results. Innoeduca. International Journal of Technology and Educational Innovation, 10(2), 1-18. https://doi.org/10.24310/ijtei.102.2024.19806
Saz-Pérez, F., Mir, B. P., & Carrió, A. L. (2024). Validación y estructura factorial de un cuestionario TPACK en el contexto de Inteligencia Artificial Generativa (IAG). Hachetetepé: Revista Científica de Educación y Comunicación, (28), 1-14. https://doi.org/10.25267/Hachetetepe.2024.i28.1101
Shaffer, D. R., & Kipp, K. (2010). Developmental Psychology: Childhood and Adolescence. Wadsworth.
Soriano-Alcantara, J. M., Guillén-Gámez, F. D., & Ruiz-Palmero, J. (2024). Exploring Digital Competencies: Validation and Reliability of an Instrument for the Educational Community and for all Educational Stages. Technology, Knowledge and Learning, 1-20. https://doi.org/10.1007/s10758-024-09741-6
Thompson, B. (2004). Exploratory and confirmatory factor analysis: Understanding concepts and applications. American Psychological Association. https://doi.org/10.1037/10694-000
Tiwari, C. K., Bhat, M. A., Khan, S. T., Subramaniam, R., & Khan, M. A. I. (2024). What drives students toward ChatGPT? An investigation of the factors influencing adoption and usage of ChatGPT. Interactive Technology and Smart Education, 21(3), 333-355. https://doi.org/10.1108/ITSE-04-2023-0061
Üzüm, B., Elçiçek, M., & Pesen, A. (2024). Development of Teachers’ Perception Scale Regarding Artificial Intelligence Use in Education: Validity and Reliability Study. International Journal of Human–Computer Interaction, 1-12. https://doi.org/10.1080/10447318.2024.2385518
Uzumcu, O., & Acilmis, H. (2024). Do innovative teachers use AI-powered tools more interactively? a study in the context of diffusion of innovation theory. Technology, Knowledge and Learning, 29(2), 1109-1128. https://doi.org/10.1007/s10758-023-09687-1
Wang, F., Cheung, A. C., Chai, C. S., & Liu, J. (2024). Development and validation of the perceived interactivity of learner-AI interaction scale. Education and Information Technologies, 1-32. https://doi.org/10.1007/s10639-024-12963-x
Wang, Y. M., Wei, C. L., Lin, H. H., Wang, S. C., & Wang, Y. S. (2022). What drives students’ AI learning behavior: A perspective of AI anxiety. Interactive Learning Environments, 1-17. https://doi.org/10.1080/10494820.2022.2153147
Wang, Y. Y., & Wang, Y. S. (2022). Development and validation of an artificial intelligence anxiety scale: An initial application in predicting motivated learning behavior. Interactive Learning Environments, 30(4), 619-634. https://doi.org/10.1080/10494820.2019.1674887
Yilmaz, F. G. K., Yilmaz, R., & Ceylan, M. (2023). Generative artificial intelligence acceptance scale: A validity and reliability study. International Journal of Human–Computer Interaction, 1-13. https://doi.org/10.1080/10447318.2023.2288730
Yu, H., & Guo, Y. (2023, June). Generative artificial intelligence empowers educational reform: current status, issues, and prospects. Frontiers in Education, 8, Article 1183162. https://doi.org/10.3389/feduc.2023.1183162
Zhan, Y., Qiu, Z., Li, X., & Zhao, Z. (2024). Ease of Use or Fun Perception? Factors Affecting Retention of Newly Registered Mobile Game Players Based on Flow Theory and The Technology Acceptance Model. Journal of Internet Technology, 25(4), 497-505. https://doi.org/10.70003/160792642024072504001
Additional information
How to cite: Gómez-García, M., Ruiz-Palmero, J. Boumadan-Hamed, M., & Soto-Varela, R. (2025). Perceptions of future teachers and pedagogues on responsible AI. A measurement instrument. [Percepciones de futuros docentes y pedagogos sobre uso responsable de la IA. Un instrumento de medida]. RIED-Revista Iberoamericana de Educación a Distancia, 28(2), 105-130. https://doi.org/10.5944/ried.28.2.43288