Estudios e Investigaciones

Challenges of generative Artificial Intelligence in higher education: promoting its critical use among students

Desafíos de la Inteligencia Artificial generativa en educación superior: fomentando su uso crítico en el estudiantado

Teresa Romeu Fontanillas
Universitat Oberta de Catalunya, UOC, España
Marc Romero Carbonell
Universitat Oberta de Catalunya, UOC, España
Montse Guitert Catasús
Universitat Oberta de Catalunya, UOC, España
Pablo Baztán Quemada
Universitat Oberta de Catalunya, UOC, España


RIED-Revista Iberoamericana de Educación a Distancia, vol. 28, núm. 2, pp. 209-231, 2025

Asociación Iberoamericana de Educación Superior a Distancia

Received: 1 December 2024

Approved: 11 March 2025

Published: 1 July 2025

Abstract: The widespread emergence of generative artificial intelligence (GenAI) presents significant challenges for its integration into Higher Education. Understanding GenAI, its possibilities, and its risks must go hand in hand with a critical reflection on the regulatory and ethical issues it raises. It is essential for students to develop informed perspectives that allow them to use GenAI responsibly. This article presents the results of incorporating a specific learning resource on GenAI, along with a shared ethical discussion, into the curriculum of an online university course. The study analyzes both quantitative and qualitative data collected from over 900 university students through an online questionnaire featuring open and closed questions. Two groups were compared: one that did not have access to the learning resource and debate, and another that did. The research examines how this educational experience influences students' self-perceived knowledge of GenAI and explores the impact of other variables, such as the participants' level of education and the type of studies they pursued. From the qualitative analysis, seven categories emerged, grouping the ethical principles that students should consider when engaging with GenAI. The findings demonstrate that specific training on GenAI improves students' understanding, helping them critically assess its potential and challenges. Additionally, the results contribute to enhancing course components related to GenAI, fostering a more responsible and reflective approach to its use.

Keywords: artificial intelligence, university studies, ethics, critical thinking.

Resumen: La irrupción generalizada de la Inteligencia Artificial generativa (IAG) plantea nuevos desafíos para su integración en la Educación Superior. El conocimiento de la IAG, de sus posibilidades y de sus riesgos, debe ir acompañado de la reflexión sobre los aspectos normativos y éticos que esta nueva tecnología suscita. Este artículo presenta los resultados de incluir, en el currículum de la asignatura de una universidad en línea, un recurso de aprendizaje sobre la IAG y una reflexión compartida sobre las implicaciones éticas de su uso. Se analizan los datos cuantitativos y cualitativos obtenidos de más de 900 estudiantes universitarios mediante un cuestionario en línea con preguntas abiertas y cerradas, distinguiendo dos grupos: uno que no tuvo acceso al recurso de aprendizaje y posterior debate y otro que sí. Se analiza comparativamente cómo influye la formación seguida en el conocimiento autopercibido y la influencia en este conocimiento de otras variables, como el nivel formativo de los participantes o la tipología de estudios cursados. Del análisis cualitativo han emergido siete categorías que aglutinan los principios éticos que el estudiantado debería tener en cuenta a la hora de utilizar la IAG. Los resultados demuestran cómo una formación específica sobre IAG mejora el conocimiento de esta, ayudando a discernir sobre sus potencialidades y aspectos críticos de forma informada y consciente, al tiempo que ayudan a la mejora de los elementos de la asignatura relativos a la IAG.

Palabras clave: inteligencia artificial, estudios universitarios, ética, pensamiento crítico.

INTRODUCTION

The emergence and rapid adoption of Generative Artificial Intelligence (GenAI) have revolutionised the creation of textual, visual, and audiovisual content. GenAI is defined as "a technology that (i) leverages deep learning models to (ii) generate human-like content (e.g., images, text) in response to (iii) complex and varied prompts (e.g., languages, instructions, questions)" (Lim et al., 2023, p. 2), bringing both great promise and significant challenges. GenAI tools have drastically improved their capabilities compared to previous versions, enabling the completion of a wide variety of tasks without requiring specialized knowledge (Jarrahi et al., 2023), while also feeding on existing online information, including newly generated content.

GenAI enables anyone with internet access to interact with it, offering new ways to relate to, consume, and share information. Its implications span practically every field, making it difficult to fully grasp the scope of this new reality.

A significant threat is the compromise of security and privacy due to the flow of millions of users' data and knowledge (Ruschemeier, 2024; Shi, 2023), as well as potential bias or lack of accuracy in the information. As noted by Grassini (2023), responses from models such as ChatGPT may not always be accurate, especially in specialized fields. In addition, the generation of content that appears authentic but has no factual basis fosters uncertainty about the reliability of the information provided (Raman et al., 2024; Wach et al., 2023).

Moreover, GenAI amplifies inequalities associated with the digital divide (Farahani & Ghasemi, 2024; Wilmers, 2024), creating a new form of inequality and hindering the democratisation of knowledge (Pragya, 2024). Another concern is its negative environmental impact, as training and using large GenAI models require significant amounts of energy, increasing greenhouse gas emissions (Ponce del Castillo, 2024; Rane et al., 2024).

It is also crucial to consider how GenAI is transforming and will continue to transform professional fields. According to Zarifhonarvar (2024), this technology will have a significant impact on sectors such as industry, banking, technology, and social sciences. Consequently, it is already affecting the labor market by eliminating some professions and creating new ones. In this respect, “generative AI has the potential to change the anatomy of work, augmenting the capabilities of individual workers by automating some of their individual activities” (Chui et al., 2023, p. 5). However, it also carries the risk of significantly increasing unemployment rates (Frey & Osborne, 2023).

This complex scenario underscores the fundamental importance of the ethical dimension of GenAI use, particularly in efforts to establish an appropriate regulatory framework, such as the new European regulation on Artificial Intelligence, Regulation (EU) 2024/1689, adopted by the European Parliament, which establishes harmonised rules on this matter.

In this context, education is emerging as one of the most sensitive areas in relation to the application of GenAI, raising critical questions about its transformation. Among the most pressing issues are the role of educators in this new landscape (Chan & Tsi, 2024) and the impact on students’ acquisition of competencies (Lim et al., 2023). UNESCO has been one of the key organisations to tackle the need for a framework to address these concerns, for example in the Beijing Consensus on AI and Education (UNESCO, 2019), which makes recommendations for adopting a human-centred approach to the use of AI in education. Similarly, the Recommendation on the Ethics of Artificial Intelligence (UNESCO, 2021a) provides a normative framework for dealing with critical aspects of GenAI, including education and research. Building on this, UNESCO published AI and education: Guidance for policy-makers (UNESCO, 2021b), which provides concrete policy recommendations for the appropriate use of AI in education. More recently, Guidance for generative AI in education and research (UNESCO, 2023) focused on the implications of GenAI in these fields.

Focusing on Higher Education (HE), as noted by García-Peñalvo et al. (2024), this technology also has a significant impact. Institutions, researchers, faculty, and university students face a paradigm shift (Huang et al., 2021), where the speed of change leaves little room for adaptation (Grassini, 2023; Hwang & Chen, 2023). This situation has led to a proliferation of studies such as that of Al Shloul et al. (2024), analyzing how universities are positioning themselves regarding tools like ChatGPT, or Wang and Lei (2024), examining the impact of GenAI on university online environments.

As highlighted by the Conference of Rectors of Spanish Universities (CRUE), "GenAI tools offer students the opportunity to receive more and better education. These tools can already play a role in earlier stages, helping to facilitate access to the university world" (Cruz Argudo et al., 2024, p. 7). However, GenAI poses significant challenges due to its ability to generate highly complex academic texts, potentially allowing students to complete assignments without developing competences such as inquiry, critical thinking, or the ability to distinguish between real and false information (Rahman & Watanobe, 2023).

All of this calls upon various actors in the educational system to seek new ways of working and establish new legal and ethical parameters.

Thus, the growing body of research covers diverse topics such as university students' perceptions of its implications (Johnston et al., 2024), its impact on academic performance and motivation (Mohamed et al., 2024), or the acceptance of this technology by students and academic staff (Kanont et al., 2024; Shahzad et al., 2024). It also addresses possible negative effects, such as the proliferation of academic plagiarism (Cotton et al., 2024) or its impact on psychological processes and the regulation of emotions, such as anxiety resulting from its use (Kaya et al., 2024).

Building on the extensive existing body of research, this study aims primarily to explore students' knowledge of GenAI and GenAI-based tools at an online university, also considering ethical criteria and the main concerns its use raises for them.

Context

Since the adaptation to the EHEA in 2009, the Universitat Oberta de Catalunya (UOC) has been offering the cross-disciplinary subject "ICT Competencies" (CTIC), where students gradually and integrally acquire digital competence, which is fundamental at UOC as a fully online university. In this course, students develop a digital project in teams following the OCPBL model (Romero et al., 2024).

In the second semester of the 2023/24 academic year, a learning resource on GenAI was incorporated, complemented by a graded virtual debate on its implications in the academic field, which is the focus of this study. The collected data allowed for a comparison between the two semesters, as GenAI was not addressed in the first semester, which lacked this resource and debate.

The resource design (Sebastià, 2024) was coordinated by the course faculty in response to the increasing use of GenAI for academic tasks. The CTIC course provides an ideal framework for introducing this knowledge, as it is offered across UOC's various degree programs, incorporating the ethical dimension of digital competence, promoting responsible use in learning activities, and addressing the challenges involved.

To this end, the resource is structured into the following sections:

  1. What is generative artificial intelligence and what does it offer us?
  2. GenAI tools
  3. How to interact with AI?
  4. Ethical guidelines for the use of AI
  5. The most common uses of AI in learning

Given the nature of the activity, Section 4 of this resource, entitled “Ethical guidelines for the use of AI”, is of particular importance. This section is divided into two parts: one detailing the challenges posed by GenAI, such as disinformation, manipulation, and data security and privacy; and another entitled “Considerations for UOC students”, which provides guidelines for its use in academic settings. These include complying with relevant regulations, citing assignments where GenAI has been used, critically evaluating the information it provides, avoiding the input of personal, confidential or proprietary data, and planning its use to avoid dependency on the tool (Sebastià, 2024).

As mentioned above, and as an activity requiring a deep, analytical reading of the resource, students participated in a discussion based on the following fictional case of team collaboration:

A group of four students is working on a digital team project. They have divided the tasks fairly, with each member taking primary responsibility for one part of the project, while also validating the contribution of another member. Cristina is assigned to check Leo’s work and identifies significant problems with both the coherence of the text and the authenticity of the data and arguments presented. When she raises these concerns, Leo tells her that he completed his part using a generative AI tool that produced all the content for the activity. As the project guidelines explicitly warn against plagiarism and Cristina believes that this is against academic rules, she insists that Leo redo his part. She communicates this to both Leo and the rest of the team.

Based on this case study, students are asked a series of questions about Leo’s actions. They are encouraged to discuss these issues in the course forum, to propose possible solutions to the situation and to reflect together on how to use such tools without violating academic rules, while at the same time enhancing learning outcomes through activity completion.

Thus, in the ICT Competencies course, GenAI is approached from a reflective perspective, coupled with a practical understanding of its application in academic activities.

METHODOLOGY

The main objective of this research is to explore the knowledge that HE students have about GenAI. To this end, the study is structured around three research questions:

  1. What self-perceived knowledge do students have about GenAI?
  2. What variables influence this knowledge?
  3. What are the key ethical considerations for students when using GenAI in higher education?

A mixed-methods approach was employed, integrating qualitative and quantitative data collection and analysis (Creswell, 2014; Zhou et al., 2024).

Data were collected through an online survey designed by the researchers and administered to two different groups of students during the two semesters of the 2023/2024 academic year. The first semester group did not have access to the aforementioned resource and did not participate in a graded discussion activity, while the second group had access to the resource and participated in the discussion. The involvement of these two different groups allows for the comparative analysis presented in the following sections.

The survey consisted of closed multiple-choice questions, a closed 1-5 Likert scale question, and open-ended questions on various aspects of GenAI. Demographic questions were also included (highest level of education, field of study, age and gender). To ensure the validity of the items, a content validation process was carried out by nine experts specialised in technology and education from the Edul@b research group.

The study population consists of students enrolled in the CTIC course, a transversal subject across UOC undergraduate programs, with 3,574 students enrolled in the first semester and 2,532 in the second semester of the 2023-24 academic year. Data collection was conducted at the end of both semesters, ensuring confidentiality and anonymity, using a Google Forms survey. The sample reflects voluntary student participation and is representative of each of the degree programs in which the course is offered. It includes 21% of the first-semester population and 11% of the second-semester population, with a confidence level of 95% and a margin of error of 3.32% and 5.93%, respectively. Table 1 presents these data by study program.

Table 1
Population and sample data for the set of participants distributed by field of study
Fields: E1 = Law and Political Science; E2 = Economics and Business; E3 = Arts and Humanities; E4 = Psychology and Education Sciences; E5 = Computer Science and Telecommunications.

                 Total    E1    E2    E3    E4    E5
1st semester
   Population     3242   882   891   298   738   433
   Sample          686   157   216    64   146   103
   % pop.           21    18    24    21    20    24
2nd semester
   Population     2181   539   534   227   532   349
   Sample          243    46    74    31    38    54
   % pop.           11     9    14    14     7    15
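The reported margins of error can be reproduced from the population and sample sizes above. The following sketch assumes the standard formula for a proportion at a 95% confidence level (z = 1.96), maximum variability (p = 0.5), and a finite population correction, which are common survey-sampling conventions rather than details stated in the article:

```python
import math

def margin_of_error(n: int, N: int, z: float = 1.96, p: float = 0.5) -> float:
    """Margin of error (in %) for a proportion, with finite population correction."""
    fpc = (N - n) / (N - 1)  # finite population correction for sampling without replacement
    return 100 * z * math.sqrt(p * (1 - p) / n * fpc)

# Semester totals taken from Table 1: n = 686 of N = 3242, and n = 243 of N = 2181
print(round(margin_of_error(686, 3242), 2))  # → 3.32
print(round(margin_of_error(243, 2181), 2))  # → 5.93
```

Both values match the margins of error reported in the text (3.32% and 5.93%).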

The data were quantitatively analyzed using the open-source statistical software JASP.

From a qualitative research perspective, students were given the opportunity to express the ethical aspects they consider most important in the use of GenAI by responding to the open-ended question: "Mention three fundamental elements for using GenAI ethically and responsibly." This question allows for a deeper exploration of the third research question.

Using the open-source software QCAmap, responses from both groups (those who used the resource and those who did not) were coded until no new categories emerged. Axial coding of the thirteen resulting categories was conducted by four researchers, following the criterion of conceptual similarity among them. The results were subsequently triangulated against the literature, leading to the final seven categories and their definitions (Table 3 in the Results section), which were used to complete the analysis of all responses (Bonilla-García & López-Suárez, 2016). Figure 1 presents an example of coding using QCAmap: the initial category Plagiarism was integrated during axial categorization into CAT 2 (Honesty and intellectual property).

Figure 1
Example of the coding process

Text: “However, it is also important to consider the instances of plagiarism that may arise from certain AIs, such as DaVinci, as over the course of a few...”

The same software was used to analyze the debates generated from the case study, through a targeted search for the categories that emerged from the questionnaire analysis, with some contributions cited to enrich the discussion.

RESULTS AND DISCUSSION

The results are structured according to the three research questions. Regarding the first question, What self-perceived knowledge do students have about GenAI?, we first analyse the participants’ perceived level of knowledge about GenAI, distinguishing between the two semesters, as this is an initial variable that could influence the results.

The survey asked the question “How familiar are you with generative artificial intelligence?”, with five response levels on a Likert scale, defining the variable Knowledge:

  1. I have not heard of GenAI.
  2. I have heard of it but do not understand it very well.
  3. I have basic knowledge of GenAI.
  4. I understand what it is and some of its applications.
  5. I have in-depth knowledge of GenAI and a detailed understanding of how it works.

Significant differences (t = -10.922, p < 0.001) were observed between the group that had not used the learning resource (No-resource group, M = 3.12; SD = 1.01) and the group that had used it (Resource group, M = 3.57; SD = 0.85). These results place the first group at level 3 (I have basic knowledge of GenAI), while the second group is closer to level 4 (I understand what it is and some of its applications). To examine the relationship between these two variables in more detail, Figure 2 shows the results for each level of the Knowledge variable for the No-resource and Resource groups.
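The group comparison above can be illustrated from the reported summary statistics alone. A minimal sketch of Welch's unequal-variance t statistic follows; note that the per-group sizes (686 and 243) are taken from the Table 1 samples as an assumption, since the article does not state the n behind this particular test, so the resulting value differs from the reported t = -10.922:

```python
import math

def welch_t(m1: float, sd1: float, n1: int, m2: float, sd2: float, n2: int) -> float:
    """Welch's t statistic from group summary statistics (unequal variances)."""
    se = math.sqrt(sd1**2 / n1 + sd2**2 / n2)  # standard error of the difference in means
    return (m1 - m2) / se

# No-resource group (M = 3.12, SD = 1.01) vs Resource group (M = 3.57, SD = 0.85);
# n1 = 686 and n2 = 243 are assumed from the Table 1 samples.
t = welch_t(3.12, 1.01, 686, 3.57, 0.85, 243)
print(round(t, 2))  # → -6.74; negative, i.e. the Resource group reports higher knowledge
```

Even under these assumed sample sizes the difference is large and in the same direction as reported, consistent with the conclusion that the resource is associated with higher self-perceived knowledge.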

Figure 2
Results for each level of the Knowledge variable for the No-resource and Resource groups

It is worth noting that 26.53% of the No-resource group report having never heard of GenAI or having heard of it but not understanding what it is (combining levels 1 and 2 of the scale), while this percentage drops to 1.24% in the Resource group. Thus, a quarter of students gained at least basic knowledge of GenAI through the resource. Moreover, those who accessed the resource and the debate rated them very positively, with scores of 3.96 and 4.01 out of 5, respectively.

The results for level 4 stand out due to the relative difference, increasing from 34.99% to 64.20% with the use of the resource, while the combined results for levels 4 and 5 show that the percentage of students who report greater knowledge increases from 40.38% to 76.55%. These results reinforce how specific training on GenAI proves effective in increasing knowledge about it (Ruiz Mendoza et al., 2024).

To further explore these results, and in response to the second research question (What variables influence this knowledge?), Table 2 presents the average results obtained for the Knowledge variable in relation to the variables Education, Age, and Field of Study.

Table 2
Mean knowledge scores by education, age, and field of study
         Education                      Age                           Field of study
        1     2     3     4     5     1     2     3     4     5     1     2     3     4     5
M     3.32  3.21  3.55  3.41  3.92  3.09  3.29  3.31  3.32  3.47  3.24  3.30  3.52  3.05  3.64
SD    0.94  1.01  0.92  0.95  0.97  1.04  0.96  0.99  0.99  0.90  1.01  0.92  0.93  0.99  0.91
Note: Education: 1 = Secondary school, 2 = Bachelor's degree, 3 = Master's degree, 4 = Postgraduate studies, 5 = PhD. Age (years): 1 = under 20, 2 = 20 to 30, 3 = 30 to 40, 4 = 40 to 50, 5 = over 50. Field of study: 1 = Law and Political Science, 2 = Economics and Business, 3 = Arts and Humanities, 4 = Psychology and Education Sciences, 5 = Computer Science, Multimedia and Telecommunications.

For the variable Education, the mean knowledge scores are slightly lower for the groups with lower levels of education (secondary school and bachelor's degree) than for those with higher levels of education (master's degree, postgraduate studies and PhD): 3.27 on average for the former versus 3.55 for the latter. ANOVA tests indicate significant differences based on education, but post-hoc analysis (Tukey's test, p = 0.004) limits these differences to the comparison between the bachelor's degree group, with a mean of 3.21, and the master's degree group, with a mean of 3.55. This suggests that prior academic attainment is not, in general, a factor explaining differences in knowledge of GenAI.

Regarding the variable Age, the most notable difference is between the group under 20, with a mean score of 3.09, and those over 50, with a mean score of 3.47. However, these differences are not statistically significant.

For the Field of study variable (Table 2), ANOVA tests and post-hoc Tukey analyses reveal statistically significant differences between the following pairs of academic disciplines: 1 (Law and Political Science) and 5 (Computer Science, Multimedia and Telecommunications), with p = 0.005; 2 (Economics and Business) and 5 (Computer Science, Multimedia and Telecommunications), with p = 0.002; 3 (Arts and Humanities) and 4 (Psychology and Education Sciences), with p < 0.001; and 4 (Psychology and Education Sciences) and 5 (Computer Science, Multimedia and Telecommunications), with p < 0.001. These results highlight substantial variations in the Knowledge variable depending on students' field of study.

Ranking the fields of study by their mean scores for knowledge of GenAI, 5 (Computer Science, Multimedia and Telecommunications) comes out on top, followed by 3 (Arts and Humanities), 1 (Law and Political Science), 2 (Economics and Business), and finally 4 (Psychology and Education Sciences). The prominence of Computer Science, Multimedia and Telecommunications is logical, given the students’ technical knowledge in relation to GenAI. The second position of Arts and Humanities may be attributed to the profile of its students, who are pursuing a second degree and are particularly concerned with the ethical and social dimensions raised by GenAI. The fact that Psychology and Education Sciences is in last place is notable and may reflect some resistance to the use of GenAI in the academic field of Psychology. This is consistent with Newell’s (2023) assertion that oral forms of learning may mitigate reliance on GenAI in academic contexts.

These results align with research such as that of Wang and Li (2024), who found significant differences in the impact of the intention to use GenAI across different academic disciplines. Similarly, Stöhr et al. (2024) also found significant differences in students' familiarity with GenAI tools based on their field of study.

The relationship between the Resource variable and knowledge of different GenAI tools is now analyzed. Figure 3 shows the percentage of responses for each tool in both groups (No Resource and Resource).

Figure 3
Knowledge of GenAI-based tools

ChatGPT emerges as the most widely known tool, with almost 90% of participants in the No-resource group already familiar with it, rising to almost 100% among those who had used the resource. This result is not surprising in light of the scientific literature, which identifies ChatGPT as the most commonly used tool for academic tasks (Acosta-Enriquez et al., 2024; Al Shloul et al., 2024; Strzelecki, 2023).

These results confirm the usefulness of the resource in increasing knowledge of specific tools. Of note is the marked difference in knowledge of certain tools, such as Canva, a graphic design platform, which rose from 63.92% to 80.41%. This increase can be attributed to its use prior to the introduction of AI features, as Canva was already a familiar tool within the course, a point supported by studies such as Ruiz-Rojas et al. (2024), who highlight its impact on collaborative activities. Other tools mentioned more frequently were Perplexity, a natural language search engine, which increased from 4.47% to 18.78%, and Dall-E, a tool for generating images from text descriptions, which increased from 16.74% to 31.84%.

Having analysed some of the differences in students' knowledge of GenAI, our focus now shifts to their perspectives on various ethical concerns related to the use of GenAI, which addresses the third research question: What are the key ethical considerations for students when using GenAI in higher education? To this end, students were asked to indicate which critical concerns they found most worrying, with the option to select multiple responses (labelled A to G in Figure 4).

Figure 4 shows the percentage of participants in the No-resource and Resource groups who selected these critical concerns.

Figure 4
Critical concerns around the use of GenAI

The concerns that worry both groups the most, with minimal differences between them, are A (lack of knowledge about GenAI and the consequences of its use), E (disinformation and manipulation [e.g. deepfakes]) and C (the potential for GenAI to create misleading or offensive content). This finding can be attributed to the widespread social alarm regarding these issues (Łabuz & Nehring, 2024; Shoaib et al., 2023), which has also been addressed in the university context (Roe et al., 2024).

On the other hand, the two critical concerns showing the most significant change are B (the provision of personal data to GenAI tools) and D (the possibility of biased information being provided by GenAI). For both, concern increases with the inclusion of the resource (by 8.90 and 9.34 points, respectively). Concern about aspect G (uncertainty about the legality of its use) is markedly lower in both groups, meaning that although the resource addresses the legal challenges of GenAI use, it has not had an impact on this aspect.

Looking more closely at the influence of resource usage and participation in the debate, Figure 5 presents the relationship between the variables Knowledge and Critical Aspects. As there are almost no students at levels 1 and 2 after using the resource (see Figure 2), the analysis focuses on the differences observed between the two groups (mean of the "Yes" group minus mean of the "No" group) for levels 3, 4, and 5. For example, for the group with a knowledge level of 3, using the resource decreases concern about critical aspect A by 4.6 points compared with those who had not used it, whereas at knowledge levels 4 and 5, concern increases by 2.3 and 12.8 points, respectively. This pattern is repeated for critical aspects B, E, and F, while for aspect G the trend is reversed; this latter result may be explained by the fact that aspect G is formulated negatively. For aspect D, concern increases across all levels, so only participants at level 3 deviate from the previous pattern. Regarding aspect C (the potential for GenAI to create misleading or offensive content), a decrease in concern is observed at all knowledge levels, which may be because, in this case, participants are addressed as AI users generating their own content.

Figure 5
Variation in concern for critical aspects of GenAI

These results confirm that training in GenAI significantly influences how university students position themselves regarding the various critical aspects associated with its use. To understand the underlying causes of the changes in concern levels for each critical aspect, a more in-depth study would be necessary.

During the qualitative study, the categories presented in Table 3 emerged, along with their definitions and an example from the literature.

For example, in the category Critical analysis and cross-checking of information (CAT 1), participants mentioned the importance of "Verifying information by cross-checking it with other sources." Similarly, in the literature, as noted by Gallent-Torres et al. (2023), it is recognized that in order to use GenAI effectively, individuals must possess competencies that enable them to assess the quality of its outputs.

Table 3
Emerging categories for ethical and responsible use of GenAI
CAT 1. Critical analysis and cross-checking of information: cross-referencing multiple sources and critically analysing the information obtained (Gallent-Torres et al., 2023).
CAT 2. Honesty and intellectual property: citing the original source of information, including text and ideas, and being transparent about the use of GenAI (Hadi et al., 2024).
CAT 3. Data protection and security: avoiding exposure of personal or third-party data and taking precautions to ensure security when using GenAI (Kajiwara & Kawabata, 2024).
CAT 4. Responsibility and fit for purpose: using GenAI responsibly and avoiding misuse (Jobin et al., 2019).
CAT 5. Use of GenAI as a complementary tool: employing GenAI as an assistive tool to complete tasks without replacing the learning process or human interaction (Chan, 2023).
CAT 6. Knowledge and awareness of its implications: providing training for students and educators on the use and effective integration of GenAI, including concepts, capabilities and ethical considerations, to promote awareness in a transparency-friendly environment (Lee et al., 2024).
CAT 7. Regulation: complying with the relevant regulations on the use of GenAI (European Parliament, 2024).

Blank responses, those indicating a lack of knowledge and those not related to the question were also grouped together. A reduction of almost eight percentage points was observed between the two groups (26.5% for the No-resource group and 18.1% for the Resource group), suggesting that the inclusion of the resource and discussion activity in the ICT Competencies course improved students' knowledge of GenAI.

The results of analyzing the responses to the question "Mention three fundamental elements for using GenAI ethically and responsibly" using the defined categories are shown in Figure 6. The figure presents the percentage of times each category is mentioned relative to the total number of responses, distinguishing between participants who used the GenAI Resource and engaged in the debate activity and those who did not.

Figure 6
Percentage of participants mentioning the emerging categories related to the ethical and responsible use of GenAI

Among students who used the resource, a significant increase was observed in the percentage of mentions for three categories: CAT 1 (Critical analysis and cross-checking of information), CAT 2 (Honesty and intellectual property), and CAT 5 (Use of GenAI as a complementary tool). This finding aligns with the literature review conducted by Gallent-Torres et al. (2023), which examines publications on pedagogical experiences aimed at enhancing the ethical use of AI in higher education (Kong et al., 2023). For the remaining categories, the differences fall within a narrow range of 0.4–3.7 percentage points.

In both groups, the most frequently mentioned categories are CAT 2 (Honesty and intellectual property) and CAT 1 (Critical analysis and cross-checking of information), while the least frequently mentioned are CAT 3 (Data protection and security), CAT 6 (Knowledge and awareness of its implications), and CAT 7 (Regulation). The third most mentioned category in the Yes group is CAT 5 (Use of GenAI as a complementary tool), whereas in the No group, it is CAT 4 (Responsibility and fit for purpose).

The following discussion presents the categories depicted in Figure 6, ranked from most to least frequently mentioned, and supplemented with student testimonies drawn from their contributions to the case study discussion (referenced as stu. 1, etc.).

The most frequently mentioned category in both groups is CAT 2: Honesty and intellectual property (M Yes = 45.74%, M No = 28.41%, difference = 17.33 points). Plagiarism, defined as presenting someone else's work as one's own without proper citation, is one of the primary concerns of faculty regarding the use of AI in higher education (Lee et al., 2024). Research on students' engagement with plagiarism remains insufficient (Gallent-Torres et al., 2023), yet existing studies indicate its widespread nature and, to a lesser extent, students' concerns about its consequences (Sullivan et al., 2023).

Strategies to minimize plagiarism in higher education include providing training (Cebrián-Robles et al., 2018) and the integration of ethical AI use into curricula (Kajiwara & Kawabata, 2024; Lim et al., 2023). The latter strategy was also highlighted in student debates at this university: "promoting a balanced approach that encourages ethical and responsible AI use, along with appropriate training for both students and faculty on how to use it effectively” (stu. 25).

The second most frequently mentioned category is CAT 1: Critical analysis and cross-checking of information (M Yes = 44.57%, M No = 27.58%, difference = 16.99 points). The nearly 17-point difference suggests that students who did not use the resource were less aware of the proactive role required in AI use. This aspect was frequently discussed in debates, particularly in relation to critical thinking: "active learning and creativity among students should be fostered… through activities that require critical thinking" (stu. 54). Given that critical thinking is a key 21st-century competency (Martínez-Bravo et al., 2021), the challenges posed by the rise of AI further underscore its importance.

The third most frequently mentioned category in the Yes group, with a significant difference between the two groups, is CAT 5: Use of GenAI as a complementary tool (M Yes = 33.72%, M No = 19.64%, difference = 14.08 points). This category is closely related to the previous two, as illustrated by the following student statements: "it is essential that students understand that AI should be used as a complement, not as a replacement for their own critical and creative thinking" (stu. 23), and "I am the first to use AI tools to expand information, but only as supplementary material" (stu. 8). A positive perspective on this category emphasizes AI's capacity to streamline various tasks, such as brainstorming, translations, and self-learning tools (Andión & Cárdenas, 2023).

The remaining categories, where differences between the two groups are minor, are briefly discussed below:

The relatively high rating of CAT 4: Responsibility and fit for purpose suggests a shared social awareness in both groups regarding the purpose of AI use and individual responsibility. This awareness does not appear to have been influenced by the resource or the debate but is explicitly mentioned in the course plan: "he misused AI; the course plan is clear on this matter, and he did not consider it” (stu. 2). Surprisingly, CAT 3: Data protection and security received low mention, despite the widespread societal concern regarding this issue.

The least mentioned categories, namely CAT 6 (Knowledge and awareness of its implications) and CAT 7 (Regulation), can be interpreted as external factors for users. While the case description and discussions included references to complying with existing regulations – “it is necessary to advance its use through laws and regulations” (stu. 11) – in the categorical analysis, “Regulation” refers to a demand for greater institutional control rather than an ethical commitment by users.

CONCLUSIONS

The results presented in this study indicate that the approach to GenAI in the course significantly contributes to increasing students' knowledge. Both the educational resource and the reflective debate activity based on a case study are validated as effective pedagogical strategies.

Regarding the variables that influence students' knowledge of GenAI, age and prior educational level do not have a statistically significant impact. However, the field of study does play a relevant role, with computer-related engineering and the humanities standing out compared to fields such as psychology and education. These findings underscore the need to tailor the pedagogical approach to GenAI for specific disciplines, particularly those less familiar with this technology, to ensure that all students are aware of both its potential and risks in academic settings.

The training implemented has generally increased students' awareness and knowledge regarding the critical aspects of using GenAI in an academic context. This is particularly evident in areas such as intellectual honesty and responsible use of these tools. However, students did not express significant concern about the legal challenges associated with GenAI use.

Based on these findings and from a practical application perspective, a redesign of course activities is necessary. First, the case study used for debate should be revised to explicitly incorporate a critical legal aspect. Second, a collaborative analysis activity should be developed to examine the terms and conditions of GenAI applications, encouraging reflection on their legal implications (e.g., data management). Third, to promote structured use of GenAI, students will be required to conduct a portion of their research using GenAI tools, allowing them to reflect on how these tools affect the academic production process. These three proposals will be implemented in the coming semester.

The main limitations of this study include potential sample bias, as participation was voluntary, and an imbalance in the sample-to-population ratio between students who accessed the resource and those who did not. However, these limitations are mitigated by the large number of responses. Additionally, the study was conducted within a specific context, meaning its generalization would require similar studies in other settings, such as secondary education or traditional in-person universities.

Regarding future research directions, it would be valuable to explore disciplinary differences in greater depth, as well as to analyze the factors influencing students' perceptions of critical and ethical aspects of GenAI use in academia. Additionally, investigating how the emergence of these tools affects students' acquisition of digital competencies would be relevant, incorporating objective assessments as research instruments to measure their learning outcomes.

Another promising research avenue would be to examine faculty perceptions of the course, identifying their concerns and enhancing the quality of their teaching practices.

In conclusion, given the short period since the widespread adoption of GenAI, these results contribute to advancing a new framework that supports the evolution of teaching and learning regarding GenAI and its implications, through a validated methodological model within an online university course.

REFERENCES

Acosta-Enriquez, B. G., Arbulú Ballesteros, M. A., Huamaní Jordan, O., López Roca, C., & Saavedra Tirado, K. (2024). Analysis of college students' attitudes toward the use of ChatGPT in their academic activities: effect of intent to use, verification of information and responsible use. BMC Psychology, 12(1), 1-18. https://doi.org/10.1186/s40359-024-01764-z

Al Shloul, T., Mazhar, T., Abbas, Q., Iqbal, M., Ghadi, Y. Y., Shahzad, T., Mallek, F., & Hamam, H. (2024). Role of activity-based learning and ChatGPT on students' performance in education. Computers and Education: Artificial Intelligence, 6, 100219. https://doi.org/10.1016/j.caeai.2024.100219

Andión, M., & Cárdenas, D. I. (2023). Convivir con inteligencias artificiales en la educación superior: Retos y estrategias. Perfiles Educativos, 45(Especial), 56-69. https://doi.org/10.22201/iisue.24486167e.2023.Especial.61691

Bonilla-García, M. Á., & López-Suárez, A. D. (2016). Ejemplificación del proceso metodológico de la teoría fundamentada. Cinta de Moebio, (57), 305-315. https://doi.org/10.4067/S0717-554X2016000300006

Cebrián-Robles, V., Raposo-Rivas, M., Cebrián-de-la-Serna, M., & Sarmiento-Campos, J. A. (2018). Percepción sobre el plagio académico de estudiantes universitarios españoles. Educación XX1, 21(2), 105-129. https://doi.org/10.5944/educxx1.20062

Chan, C. K. Y. (2023). A comprehensive AI policy education framework for university teaching and learning. International Journal of Educational Technology in Higher Education, 20(1), 1-25. https://doi.org/10.1186/s41239-023-00408-3

Chan, C. K. Y., & Tsi, L. H. Y. (2024). Will generative AI replace teachers in higher education? A study of teacher and student perceptions. Studies in Educational Evaluation, 83, 101395. https://doi.org/10.1016/j.stueduc.2024.101395

Chui, M., Hazan, E., Roberts, R., Singla, A., Smaje, K., Sukharevsky, A., Yee, L., & Zemmel, R. (2023). The economic potential of generative AI: The next productivity frontier. McKinsey Global Institute. http://dln.jaipuria.ac.in:8080/jspui/bitstream/123456789/14313/1/The-economic-potential-of-generative-ai-the-next-productivity-frontier.pdf

Cotton, D. R. E., Cotton, P. A., & Shipway, J. R. (2024). Chatting and cheating: Ensuring academic integrity in the era of ChatGPT. Innovations in Education and Teaching International, 61(2), 228-239. https://doi.org/10.1080/14703297.2023.2190148

Creswell, J. W. (2014). Research design: Qualitative, quantitative, and mixed methods approaches (4th ed.). SAGE Publications.

Cruz Argudo, F., García Varea, I., Martínez Carrascal, J. A., Ruiz Martínez, A., Ruiz Martínez, P. M., Sánchez Campos, A., & Turró Ribalta, C. (2024). La Inteligencia Artificial Generativa en la docencia universitaria: Oportunidades, desafíos y recomendaciones. Crue Universidades Españolas. https://www.crue.org/wp-content/uploads/2024/03/Crue-Digitalizacion_IA-Generativa.pdf

European Parliament. (2024). Reglamento de Inteligencia Artificial (2024/0138(COD)). https://www.europarl.europa.eu/doceo/document/TA-9-2024-0138_ES.pdf

Farahani, M., & Ghasemi, G. (2024). Artificial Intelligence and Inequality: Challenges and Opportunities. International Journal of Innovation in Education, 9, 78-99. https://doi.org/10.32388/7hwuz2

Frey, C. B., & Osborne, M. (2023). Generative AI and the Future of Work: A Reappraisal. The Brown Journal of World Affairs, XXX(1). https://bjwa.brown.edu/30-1/generative-ai-and-the-future-of-work-a-reappraisal/

Gallent-Torres, C., Zapata-González, A., & Ortego-Hernando, J. L. (2023). El impacto de la inteligencia artificial generativa en la educación superior: una mirada desde la ética y la integridad académica. RELIEVE, 29(2), art. M5. https://doi.org/10.30827/relieve.v29i2.29134

García-Peñalvo, F. J., Llorens-Largo, F., & Vidal, J. (2024). La nueva realidad de la educación ante los avances de la inteligencia artificial generativa. RIED-Revista Iberoamericana de Educación a Distancia, 27(1), 9-39. https://doi.org/10.5944/ried.27.1.37716

Grassini, S. (2023). Shaping the Future of Education: Exploring the Potential and Consequences of AI and ChatGPT in Educational Settings. Education Sciences, 13(7), 692. https://doi.org/10.3390/educsci13070692

Hadi, M. U., Al Tashi, Q., Qureshi, R., Shah, A., Muneer, A., Irfan, M., Zafar, A., Shaikh, M. B., Akhtar, N., Wu, J., & Mirjalili, S. (2024). A Survey on Large Language Models: Applications, Challenges, Limitations, and Practical Usage [Preprint]. TechRxiv. https://doi.org/10.36227/techrxiv.23589741.v1

Huang, J., Shen, G., & Xiping, R. (2021). Connotation Analysis and Paradigm Shift of Teaching Design under Artificial Intelligence Technology. International Journal of Emerging Technologies in Learning, 16(5), 73-86. https://doi.org/10.3991/ijet.v16i05.20287

Hwang, G. J., & Chen, N. S. (2023). Editorial Position Paper: Exploring the Potential of Generative Artificial Intelligence in Education: Applications, Challenges, and Future Research Directions. Educational Technology and Society, 26(2). https://doi.org/10.30191/ETS.202304_26(2).0014

Jarrahi, M. H., Askay, D., Eshraghi, A., & Smith, P. (2023). Artificial intelligence and knowledge management: A partnership between human and AI. Business Horizons, 66(1), 87-99. https://doi.org/10.1016/j.bushor.2022.03.002

Jobin, A., Ienca, M., & Vayena, E. (2019). The global landscape of AI ethics guidelines. Nature Machine Intelligence, 1, 389-399. https://doi.org/10.1038/s42256-019-0088-2

Johnston, H., Wells, R. F., Shanks, E. M., Boey, T., & Parsons, B. N. (2024). Student perspectives on the use of generative artificial intelligence technologies in higher education. International Journal for Educational Integrity, 20(1), 1-21. https://doi.org/10.1007/s40979-024-00149-4

Kajiwara, Y., & Kawabata, K. (2024). AI literacy for ethical use of chatbot: Will students accept AI ethics? Computers and Education: Artificial Intelligence, 6, 100251. https://doi.org/10.1016/j.caeai.2024.100251

Kanont, K., Pingmuang, P., Simasathien, T., Wisnuwong, S., Wiwatsiripong, B., Poonpirome, K., Songkram, N., & Khlaisang, J. (2024). Generative-AI, a Learning Assistant? Factors Influencing Higher-Ed Students' Technology Acceptance. Electronic Journal of E-Learning, 22(6 Special Issue), 18-33. https://doi.org/10.34190/ejel.22.6.3196

Kaya, F., Aydin, F., Schepman, A., Rodway, P., Yetişensoy, O., & Demir Kaya, M. (2024). The Roles of Personality Traits, AI Anxiety, and Demographic Factors in Attitudes toward Artificial Intelligence. International Journal of Human-Computer Interaction, 40(2), 497-514. https://doi.org/10.1080/10447318.2022.2151730

Kong, S.-C., Cheung, W. M.-Y., & Zhang, G. (2023). Evaluating an Artificial Intelligence Literacy Programme for Developing University Students' Conceptual Understanding, Literacy, Empowerment and Ethical Awareness. Educational Technology & Society, 26(1), 16-30. https://doi.org/10.30191/ETS.202301_26(1).0002

Łabuz, M., & Nehring, C. (2024). Information apocalypse or overblown fears-what AI mis- and disinformation is all about? Shifting away from technology toward human reactions. Politics and Policy, 52(4), 874-891. https://doi.org/10.1111/polp.12617

Lee, D., Arnold, M., Srivastava, A., Plastow, K., Strelan, P., Ploeckl, F., & Palmer, E. (2024). The impact of generative AI on higher education learning and teaching: A study of educators' perspectives. Computers and Education: Artificial Intelligence, 6, 100221. https://doi.org/10.1016/j.caeai.2024.100221

Lim, W. M., Gunasekara, A., Pallant, J. L., Pallant, J. I., & Pechenkina, E. (2023). Generative AI and the future of education: Ragnarök or reformation? A paradoxical perspective from management educators. The International Journal of Management Education, 21(2), 1-13. https://doi.org/10.1016/j.ijme.2023.100790

Martínez-Bravo, M. C., Sádaba Chalezquer, C., & Serrano-Puche, J. (2021). Meta-marco de la alfabetización digital: análisis comparado de marcos de competencias del Siglo XXI. Revista Latina de Comunicación Social, 79, 76-110. https://doi.org/10.4185/RLCS-2021-1508

Mohamed, A. M., Shaaban, T. S., Bakry, S. H., Guillén-Gámez, F. D., & Strzelecki, A. (2024). Empowering the Faculty of Education Students: Applying AI's Potential for Motivating and Enhancing Learning. Innovative Higher Education, 1-23. https://doi.org/10.1007/s10755-024-09747-z

Newell, S. J. (2023). Employing the interactive oral to mitigate threats to academic integrity from ChatGPT. Scholarship of Teaching and Learning in Psychology. https://doi.org/10.1037/stl0000371

Ponce del Castillo, A. (2024). Exposing generative AI: Human-Dependent, Legally Uncertain, Environmentally Unsustainable [Preprint]. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.4975411

Pragya, A. (2024). Generative AI and epistemic diversity of its inputs and outputs: call for further scrutiny. AI and Society, 1-2. https://doi.org/10.1007/s00146-024-02097-6

Rahman, M. M., & Watanobe, Y. (2023). ChatGPT for Education and Research: Opportunities, Threats, and Strategies. Applied Sciences, 13(9), 5783. https://doi.org/10.3390/app13095783

Raman, R., Kumar Nair, V., Nedungadi, P., Kumar Sahu, A., Kowalski, R., Ramanathan, S., & Achuthan, K. (2024). Fake news research trends, linkages to generative artificial intelligence and sustainable development goals. Heliyon, 10(3), e24727. https://doi.org/10.1016/j.heliyon.2024.e24727

Rane, N., Choudhary, S., & Rane, J. (2024). Contribution of ChatGPT and Similar Generative Artificial Intelligence for Enhanced Climate Change Mitigation Strategies [Preprint]. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.4681720

Roe, J., Perkins, M., & Furze, L. (2024). Deepfakes and Higher Education: A Research Agenda and Scoping Review of Synthetic Media. Journal of University Teaching and Learning Practice, 1-22. https://doi.org/10.53761/2y2np178

Romero Carbonell, M., Romeu Fontanillas, T., Guitert Catasús, M., & Baztán Quemada, P. (2024). Validation of the OCPBL model for online collaborative project-based learning [Validación del modelo ABPCL para el aprendizaje basado en proyectos colaborativos en línea]. RIED-Revista Iberoamericana de Educación a Distancia, 27(2), 159-181. https://doi.org/10.5944/ried.27.2.39120

Ruiz Mendoza, K. K., Miramontes Arteaga, M. A., & Reyna García, C. (2024). Percepciones y expectativas de estudiantes universitarios sobre la IAG. European Public & Social Innovation Review, 9, 1-21. https://doi.org/10.31637/epsir-2024-357

Ruiz-Rojas, L. I., Salvador-Ullauri, L., & Acosta-Vargas, P. (2024). Collaborative Working and Critical Thinking: Adoption of Generative Artificial Intelligence Tools in Higher Education. Sustainability, 16(13), 5367. https://doi.org/10.3390/su16135367

Ruschemeier, H. (2024). Generative AI and Data Protection. In C. Poncibo, M. Ebers, R. Calo, & M. Zou (Eds.), Handbook on Generative AI and the Law (pp. 1-18). Cambridge University Press. https://doi.org/10.1017/cfl.2024.2

Sebastià, I. (2024). Inteligencia artificial generativa. Universitat Oberta de Catalunya. https://iag.recursos.uoc.edu/es/

Shahzad, M. F., Xu, S., & Asif, M. (2024). Factors affecting generative artificial intelligence, such as ChatGPT, use in higher education: An application of technology acceptance model. British Educational Research Journal, 1-25. https://doi.org/10.1002/berj.4084

Shi, Y. (2023). Study on security risks and legal regulations of generative artificial intelligence. Science of Law Journal, 2(11), 17-23. https://doi.org/10.23977/law.2023.021104

Shoaib, M. R., Wang, Z., Ahvanooey, M. T., & Zhao, J. (2023). Deepfakes, Misinformation, and Disinformation in the Era of Frontier AI, Generative AI, and Large AI Models. ICCA 2023 - 2023 5th International Conference on Computer and Applications, Proceedings. https://doi.org/10.1109/ICCA59364.2023.10401723

Stöhr, C., Ou, A. W., & Malmström, H. (2024). Perceptions and usage of AI chatbots among students in higher education across genders, academic levels and fields of study. Computers and Education: Artificial Intelligence, 7, 100259. https://doi.org/10.1016/j.caeai.2024.100259

Strzelecki, A. (2023). To use or not to use ChatGPT in higher education? A study of students' acceptance and use of technology. Interactive Learning Environments, 32(9), 5142-5155. https://doi.org/10.1080/10494820.2023.2209881

Sullivan, M., Kelly, A., & McLaughlan, P. (2023). ChatGPT in higher education: Considerations for academic integrity and student learning. Journal of Applied Learning & Teaching, 6(1), 1-10. https://doi.org/10.37074/jalt.2023.6.1.17

UNESCO. (2019). Beijing Consensus on Artificial Intelligence and Education. https://unesdoc.unesco.org/ark:/48223/pf0000368303

UNESCO. (2021a). Ethics of Artificial Intelligence: The Recommendation. https://www.unesco.org/en/artificial-intelligence/recommendation-ethics?hub=32618

UNESCO. (2021b). Inteligencia artificial y educación: guía para las personas a cargo de formular políticas. https://unesdoc.unesco.org/ark:/48223/pf0000379376

UNESCO. (2023). Guidance for generative AI in education and research. https://www.unesco.org/en/articles/guidance-generative-ai-education-and-research

Wach, K., Duong, C. D., Ejdys, J., Kazlauskaitė, R., Korzynski, P., Mazurek, G., Paliszkiewicz, J., & Ziemba, E. (2023). The dark side of generative artificial intelligence: A critical analysis of controversies and risks of ChatGPT. Entrepreneurial Business and Economics Review, 11(2), 7-30. https://doi.org/10.15678/EBER.2023.110201

Wang, L., & Li, W. (2024). The Impact of AI Usage on University Students' Willingness for Autonomous Learning. Behavioral Sciences, 14(10), 956. https://doi.org/10.3390/bs14100956

Wang, X., & Lei, L. (2024). A Path Study of Generative Artificial Intelligence Enabling Online Education Platforms in Colleges and Universities. Proceedings of the 2024 International Symposium on Artificial Intelligence for Education, 332-338. https://doi.org/10.1145/3700297.3700354

Wilmers, N. (2024). Generative AI and the Future of Inequality. An MIT Exploration of Generative AI. MIT. https://doi.org/10.21428/e4baedd9.777b7123

Zarifhonarvar, A. (2024). Economics of ChatGPT: a labor market view on the occupational impact of artificial intelligence. Journal of Electronic Business & Digital Economics, 3(2), 100-116. https://doi.org/10.1108/jebde-10-2023-0021

Zhou, Y., Zhou, Y., & Machtmes, K. (2024). Mixed methods integration strategies used in education: A systematic review. Methodological Innovations, 17(1), 41-49. https://doi.org/10.1177/20597991231217937

Additional information

How to cite: Romeu Fontanillas, T., Romero Carbonell, M., Guitert Catasús, M., & Baztán Quemada, P. (2025). Challenges of generative artificial intelligence in higher education: promoting its critical use among students [Desafíos de la inteligencia artificial generativa en educación superior: fomentando su uso crítico en el estudiantado]. RIED-Revista Iberoamericana de Educación a Distancia, 28(2), 209-231. https://doi.org/10.5944/ried.28.2.43535
