Estudios e Investigaciones

Generative Artificial Intelligence agent in scientific research. An explanatory analysis of classroom learning

Agente de Inteligencia Artificial Generativa en investigación científica. Un análisis explicativo del aprendizaje en el aula

Roberto Berrios Zepeda
Universidad Nacional Autónoma de Nicaragua, UNAN, Nicaragua
Lorgia Márquez Mora
Universidad Nacional Autónoma de Nicaragua, UNAN, Nicaragua

RIED-Revista Iberoamericana de Educación a Distancia, vol. 28, núm. 2, pp. 39-55, 2025

Asociación Iberoamericana de Educación Superior a Distancia

Received: 01 December 2024

Approved: 28 February 2025

Published: 01 July 2025

Abstract: There is a lack of knowledge regarding the capacity, usefulness, and effectiveness of certain technological resources, such as intelligent agents with artificial intelligence, in educational contexts for scientific research. This motivates the development and analysis of a new pedagogical strategy that uses generative intelligent agents with artificial intelligence in the construction of research projects. The objective is therefore to verify the effectiveness of a new pedagogical procedure and the design of activities that employ generative intelligent agents with artificial intelligence to enhance learning in scientific research. The method used was explanatory, with a quasi-experimental, longitudinal, and prospective design. Four project steps and their respective hypotheses were established; instruments were developed, validated, and applied to a sample of 111 study units organized into one comparison group and two intervention groups; and a repeated-measures ANOVA was conducted. Significant differences were demonstrated between the progress of the intervention groups and the comparison group in learning across four areas: the research idea, identifying research gaps and purpose; the study approach, identifying bibliographic references and the study context; the research design, determining the method and methodological procedure; and data analysis, interpreting descriptive-level data. The new methodology, assisted by artificial intelligence, yielded satisfactory overall results.

Keywords: Generative Artificial Intelligence Agents, learning, scientific research.

Resumen: Existe un desconocimiento de la capacidad, utilidad y efectividad de algunos recursos tecnológicos como los agentes inteligentes con inteligencia artificial en contextos formativos en investigación científica. Esto motiva al desarrollo y análisis de una nueva estrategia pedagógica que utilice agentes inteligentes generativos con inteligencia artificial en la construcción de proyectos de investigación. Por tanto, se pretende verificar la efectividad de un nuevo procedimiento pedagógico y el diseño de actividades que utilicen agentes inteligentes generativos con inteligencia artificial para la mejora del aprendizaje en investigación científica. El método utilizado fue explicativo con diseño cuasi experimental de corte longitudinal y prospectivo. Se establecieron cuatro pasos del proyecto y sus respectivas hipótesis, fueron construidos y validados los instrumentos, se aplicaron a una muestra de 111 elementos de estudio organizados en un grupo de comparación y dos grupos de intervención, se aplicó un análisis de ANOVA de medidas repetidas. Se demostraron las diferencias significativas del avance en los grupos de intervención y el grupo de comparación en el aprendizaje, Idea de investigación, identificando el vacío y propósito de investigación; Planteamiento del estudio, identificando referencias bibliográficas y contexto del estudio; Diseño de investigación, determinando el método y procedimiento metodológico y Análisis de datos, interpretando datos de nivel descriptivo. La nueva metodología utilizada y asistida por inteligencia artificial obtuvo resultados generales satisfactorios.

Palabras clave: agentes de Inteligencia Artificial Generativa, aprendizaje, investigación científica.

INTRODUCTION

According to IESALC-UNESCO (2020), the global public health crisis caused by the SARS-CoV-2 virus in 2019 heightened a series of challenges for the higher education system: developing pedagogical measures to formatively evaluate student learning, increasing the use and diversity of digital resources, and ensuring access to information anytime and anywhere. Other authors, such as Kotler et al. (2021), agree that the health crisis and physical distancing measures pressured institutions to become more technological. This scenario includes developments and enhancements in computing power, open-source platforms, web connectivity, cloud storage capacity, mobile electronics, and big data, enabling the advancement of technologies designed to mimic human capabilities, such as machine intelligence, natural language processing, electronic sensors, mechanical automatons, augmented and virtual reality, the Internet of Things, and blockchain (Kotler et al., 2021; Liu et al., 2022).

According to Salmerón et al. (2023), Yang et al. (2021), and Alhayani et al. (2021), technological advancements and the application of new technologies in education and professional training are more often seen as specific actions rather than structured processes managed for educational improvement.

One of the most internationally significant fields of knowledge is artificial intelligence (AI), although the scientific community has yet to reach a definitive consensus on its definition. Nevertheless, it is recognized as an interdisciplinary science with multiple approaches, particularly those focusing on human and rational thought and action, as well as its applications in perception, reasoning, and learning processes across various fields of knowledge (García-Peñalvo, 2023; DataScientest, 2023).

According to Sánchez (2023), there is a lack of understanding regarding the capacity and utility of certain technological resources, such as intelligent agents with artificial intelligence, which are both intriguing and motivating for education and professional performance. Additionally, the productivity of these tools in various professional training areas, such as scientific research, remains unexplored. This raises an important question: What is the effectiveness of a new pedagogical procedure that employs generative intelligent agents to enhance the learning of scientific research processes among undergraduate students?

According to Sánchez (2023), there is a positive attitude toward the use of ChatGPT in educational processes, as it strengthens adaptive learning, assists in writing, fosters the generation of novel ideas, and enhances research competencies. Other authors, such as González Sánchez et al. (2023), emphasize the need to understand the real impact of AI on meaningful knowledge generation. This context emphasizes the importance of analyzing the effects of new strategies that utilize generative intelligent agents to improve learning in scientific research, motivating the development of this study.

Litardo et al. (2023) argue that artificial intelligence can improve learning and adapt to students' preferences, potentially leading to increased engagement and academic performance.

This study aims to analyze strategies that optimize the use of technological resources in the development of research projects. Therefore, the objective of this research is to verify the effectiveness of a new pedagogical procedure that employs generative intelligent agents to improve the assimilation of research processes.

The following sections discuss topics related to artificial intelligence (AI), its connection to higher education and scientific research, the methodology and procedures applied, the results and discussion, conclusions, and opportunities for further research.

Artificial Intelligence

AI has its roots in the 1950s, with pioneers such as Turing (1950) and McCarthy et al. (1955) laying the theoretical foundations. At this stage, concepts such as machine learning and symbolic logic were explored, although some authors date the origin of artificial intelligence to 1943, when McCulloch and Pitts (1943) first presented a mathematical model for designing a neural network (DataScientest, 2023).

Defining artificial intelligence is complicated because there are different approaches to its development (Nilsson, 1982; García-Peñalvo, 2023). For some authors, it can be considered an extension of computer science whose purpose is to develop machines that perform actions traditionally requiring human reasoning; for others, it refers to programs governed by constraints in models that connect perception, thought, and action, to electronic resources that simulate the human capacities for observation, analysis, and intention, or to the engineering of creating intelligent machines or computer programs.

In the process of improving artificial intelligence, a series of areas of interest have been identified where it can make a significant contribution, including scientific research (Díaz, 2024), commercial research to optimize business processes and improve decision-making (Yu & Sup, 2021), and organizations that promote research in different sectors of society (UNESCO, 2021).

According to García-Peñalvo et al. (2024), there is exponential growth in computing tools with intelligent features, thanks to the popularity of large language models (LLMs) (Gruetzemacher & Paradice, 2022) and especially of the generative pre-trained transformer (GPT) models (Brown et al., 2020). This diversity of work in strategic areas of society makes it possible to recognize important functional and utilitarian advantages for processes applied to the integral and sustainable development of various social fields.

Artificial Intelligence and Education

Machine intelligence in university education is a multifaceted field experiencing significant development.

According to Villarroel (2021), AI-based approaches are being integrated to enhance the efficiency of remote teaching and learning. In this context, UNESCO has set the challenge of promoting artificial intelligence (AI) technologies guided by the principles of equity and inclusion, aligned with the achievement of the Sustainable Development Goals (SDGs) through the Education 2030 agenda.

Therefore, some research studies, such as Jia et al. (2022), emphasize the importance of educational data analysis through the exploration and discovery of knowledge in educational databases to understand student behavior patterns and improve the management of the education system.

Likewise, García-Peñalvo (2020) and Lang et al. (2022) highlight the importance of learning analytics in determining learning styles and facilitating collaboration among students, contributing to a more dynamic and effective educational process. In this context, interest is growing in how AI contributes to learning through intelligent systems and content automation (Ma et al., 2014; Yilmaz et al., 2022), fostering a more active and autonomous learning experience.

According to Sari and Purwanta (2021), AI can enhance creative learning in the classroom. Other authors, such as García Rosado (2024), propose that using these intelligent tools helps build trust with students and fosters a person-centered pedagogical process where assessment is not a control mechanism but a learning process in itself (Rudolph et al., 2023). Therefore, there is increasing interest in utilizing artificial intelligence tools to improve the productivity of teaching and learning processes, allowing for student feedback and guidance (Baker, 2016; Zawacki-Richter et al., 2019; Villarroel, 2021).

Artificial Intelligence and Research

According to López Martín (2023), the use of machine intelligence can add value to the production, editing, and dissemination of manuscripts after their publication. Similarly, the work of Lalaleo et al. (2024) establishes that AI should be a tool that optimizes essential writing in the generation of scientific knowledge, in coordination with the instructor’s experience. Other studies, such as those by García Rosado (2024), identify challenges in characterizing and systematizing experiences in the development of didactic resources and theoretical-practical content related to AI in research methodology. The work of Vera (2023) states that machine intelligence enables the processing of large amounts of data and the identification of patterns and trends, facilitating knowledge generation and data-driven decision-making.

Few studies contribute to understanding how the use of intelligent tools enhances research projects. Part of the complexity of these processes lies in recognizing that research projects are built according to the objectives, variables, and study populations defined by the researcher. Efforts are needed to identify limitations or gaps in information within a research line, which can aid in correctly formulating the study title (Ayala, 2020). In this regard, Carvajal (2023) successfully applies a procedure to systematize, delimit, and refine a research topic using generative intelligence agents. Consequently, the following is proposed.

H1: There are significant differences in pre-test and post-test responses regarding research idea learning reported by participants, depending on whether they have received the new AI-based methodological procedure.

Understanding the complexity of a study’s context and correctly defining a problem to be solved is challenging. This becomes even more difficult when there is a lack of information and necessary tools to develop this stage of the research process. Some authors, such as Ayala (2020), emphasize that correctly defining the research problem is central to an investigation. Meanwhile, the work of Carvajal (2023) establishes a procedure for systematizing and identifying information to construct a portion of the problem statement, focusing on objectives, research questions, and possible hypotheses. Consequently, the following is proposed.

H2: There are significant differences in pre-test and post-test responses regarding research problem formulation learning reported by participants, depending on whether they have received the new AI-based methodological procedure.

There are limitations in understanding correct and appropriate research design protocols, which are associated with the taxonomy of concepts and empirical skills. The work of Carvajal (2023) systematizes the theory and procedures for constructing a research design assisted by generative intelligent agents. Consequently, the following is proposed.

H3: There are significant differences in pre-test and post-test responses regarding research design learning reported by participants, depending on whether they have received the new AI-based methodological procedure.

There are limitations in understanding the correct statistical analysis technique that strengthens the confidence and reliability of generated knowledge for application or replication. The work of Carvajal (2023) successfully extracts, synthesizes, and summarizes exploratory analysis information using GPT. Consequently, the following is proposed.

H4: There are significant differences in pre-test and post-test responses regarding data analysis learning reported by participants, depending on whether they have received the new AI-based methodological procedure.

MATERIALS AND METHODS

Method

This study follows an explanatory quasi-experimental design with a longitudinal and prospective intervention.

Participants

The units of analysis include all students enrolled in the undergraduate Market Research course, totaling 111 students. Two experimental groups were organized (Experimental Group 1 with 32 students and Experimental Group 2 with 31 students), along with a control group comprising 48 students. The groups were assigned based on pre-existing enrollment records, which limited random assignment and increased the risk of bias due to external factors. However, the groups were homogeneous and demonstrated a similar level of academic competence.

Table 1
Distribution of groups by gender and age
                       AGE                                 GENDER
                       G. Control   G. Exp. 1   G. Exp. 2   G. Control   G. Exp. 1   G. Exp. 2
Valid                  48           32          31          48           32          31
Absent                 0            0           0           0            0           0
Average                19.875       20.563      19.839      1.521        1.469       1.742
Standard deviation     1.196        1.883       1.344       0.505        0.507       0.445
Source: study data

Instruments

We evaluated four steps in the scientific research process (see Figure 1). The first step, research idea, was assessed using a 17-item questionnaire with a McDonald's ω reliability coefficient of 0.765, considered acceptable. The second step, study approach, was measured using a 12-item questionnaire with a McDonald's ω of 0.81, considered good. The third step, research design, was assessed using a 14-item questionnaire with a McDonald's ω of 0.80, also considered good. The fourth step, data analysis, was measured using a 12-item questionnaire with a moderate McDonald's ω of 0.72. The instrument was adapted from the competency-based curriculum planning for market research by Sandino et al. (2019).
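For reference, since the article does not reproduce the formula behind these coefficients, McDonald's ω for a unidimensional scale is conventionally computed from the standardized item loadings and error variances of a one-factor model (a standard definition, not taken from the instrument documentation):

\omega = \frac{\left(\sum_{i=1}^{k} \lambda_i\right)^{2}}{\left(\sum_{i=1}^{k} \lambda_i\right)^{2} + \sum_{i=1}^{k} \theta_{ii}}

where the λi are the item loadings and the θii the item error variances. Under that model, the values of 0.72 to 0.81 reported above can be read as roughly 72% to 81% of each questionnaire's total-score variance being attributable to its common factor.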

A five-point Likert scale was used, where 1 represented "Definitely not" and 5 represented "Definitely yes."

Procedure

Four steps and actions of the scientific research process were proposed for development and analysis (see Figure 1).

Figure 1
Scientific research stages and systematization processes for generative intelligent agents
Source: own elaboration, adapted from UNESCO IESALC (2023) and Salmerón et al. (2023).

Each step began with a pretest administered in the classroom. The teaching methodology included lectures, followed by a methodological guide for developing the new procedure in the experimental groups. In contrast, the control group followed the traditional procedure, which consisted of lectures and independent group work outside the classroom. Each step lasted 15 days, and at the end of each step, students completed a self-assessment posttest in the classroom.

The new procedure involved developing a guide consisting of an input or prompt written from a professional perspective, as described by Morales-Chan (2023), and a language pattern that specified topic, form, emphasis, and contextual details, following Dathathri et al. (2019), for each step. The output, or search result, was used to construct the research project. The technological resource employed was Perplexity AI®, a search engine that returns sources and citations with web links. Perplexity's open-access model is based on OpenAI's GPT-3.5®, combined with the company's own large language model (LLM); Perplexity Pro offers premium access to GPT-4® and Claude 3®.
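By way of illustration only, since the guide's exact wording is not reproduced in the article, a prompt combining a professional perspective with the topic, form, emphasis, and contextual-detail pattern could be assembled as in the following Python sketch; the function name, fields, and example values are hypothetical, and the resulting text would simply be entered into the Perplexity AI search box.

# Minimal sketch of a prompt template following the pattern described above
# (professional perspective + topic, form, emphasis, contextual details).
# Field names and example content are illustrative, not the authors' exact guide.

def build_prompt(role, topic, form, emphasis, context):
    """Assemble a structured prompt for a generative search agent."""
    return (
        f"Act as {role}. "
        f"Topic: {topic}. "
        f"Output form: {form}. "
        f"Emphasis: {emphasis}. "
        f"Context: {context}."
    )

if __name__ == "__main__":
    prompt = build_prompt(
        role="a market research methodologist",
        topic="research gaps in consumer behavior toward local brands",
        form="a bulleted list of five candidate research gaps with sources",
        emphasis="peer-reviewed studies from the last five years",
        context="an undergraduate market research course project",
    )
    print(prompt)  # this text would then be pasted into Perplexity AI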

Data analysis

The comparison of pretest and posttest responses for each research project step across the groups was conducted using two ANOVA tests for repeated measures. These tests were performed to examine differences between groups and to test the defined hypotheses. The data analysis was conducted using the cross-platform software Jeffreys’s Amazing Statistics Program (JASP 0.18.1.0)®.
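The analyses themselves were run in JASP; purely as a sketch of the same kind of test, a mixed (between-group × pretest-posttest) repeated-measures ANOVA can be reproduced in Python with the pingouin package. The data below are simulated with the study's group sizes, so the scores and resulting effect sizes are illustrative only.

import numpy as np
import pandas as pd
import pingouin as pg

# Simulated pretest/posttest scores; group sizes follow the article (48, 32, 31),
# but the score values and the assumed larger gain in the experimental groups are made up.
rng = np.random.default_rng(42)
rows, sid = [], 0
for group, n, gain in [("control", 48, 2.0), ("exp1", 32, 8.0), ("exp2", 31, 8.0)]:
    for _ in range(n):
        pre = rng.normal(50, 10)
        post = pre + gain + rng.normal(0, 5)
        rows.append({"id": sid, "group": group, "time": "pre", "score": pre})
        rows.append({"id": sid, "group": group, "time": "post", "score": post})
        sid += 1
df = pd.DataFrame(rows)

# Mixed ANOVA: within-subject factor = time (pre/post), between-subject factor = group
aov = pg.mixed_anova(data=df, dv="score", within="time", subject="id", between="group")
print(aov.round(3))  # F, p and partial eta squared for group, time, and their interaction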

RESULTS

The results for each methodological step and hypothesis testing are presented below.

Step 1. Research idea

Before beginning the analysis, the assumption of equal variances was verified using Levene's test, with p = 0.11 for the pretest and p = 0.20 for the posttest, both greater than α = 0.05.
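As a minimal sketch of this check, Levene's test can be run with scipy on the per-group score lists; the values below are placeholders, since the raw pretest data are not published.

from scipy.stats import levene

# Hypothetical pretest scores per group (placeholders for the unpublished raw data)
control = [42, 45, 39, 47, 44, 41, 46]
exp1 = [41, 48, 43, 46, 40, 44, 45]
exp2 = [44, 39, 45, 42, 47, 43, 40]

stat, p = levene(control, exp1, exp2)
print(f"Levene W = {stat:.3f}, p = {p:.3f}")  # p > 0.05 -> equal variances assumed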

There were significant differences in learning levels regarding the research idea across the groups, with p < 0.001, lower than α = 0.05. The results indicate significant differences between pretest and posttest scores without separating the participants into control and experimental groups. Additionally, an interaction was observed between the research idea learning variable and the groups, showing that pretest and posttest differences varied according to the group (p < 0.001, less than α = 0.05), confirming significant differences. The criterion contributing most to the research idea factor was identifying gaps in the research line. Approximately 21% of the variability in research idea learning was explained at the time of measurement (η² = 0.21). See Tables 2 and 3.

Table 2
Within-subject effects
Cases                      Sum of Squares   df    Mean Square   F         p        η²
Research idea              1931.993         1     1931.993      124.758   < .001   0.182
Research idea ✻ GROUPS     1854.668         2     927.334       59.882    < .001   0.174
Residuals                  1842.827         119   15.486
Note: Sum of Squares Type III

Table 3
Between-subject effects
Cases       Sum of Squares   df    Mean Square   F        p        η²
GROUPS      2270.885         2     1135.442      49.365   < .001   0.213
Residuals   2737.136         119   23.001
Note: Sum of Squares Type III
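As a reading aid (a reconstruction from the published figures, not an additional computation by the authors), the reported η² values correspond to each effect's sum of squares divided by the total sum of squares across Tables 2 and 3; for the between-subject GROUPS effect:

\eta^{2}_{\mathrm{GROUPS}} = \frac{2270.885}{1931.993 + 1854.668 + 1842.827 + 2270.885 + 2737.136} \approx 0.213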

Step 2. Study approach

Before starting the analysis, the assumption of equal variances was verified using Levene's test, with p = 0.60 for the pretest and p = 0.96 for the posttest, both greater than α = 0.05. There were significant differences at a general level in study approach learning across the groups, with p < 0.001, lower than α = 0.05; without separating the participants into control and experimental groups, participants in general scored higher on the posttest than on the pretest. Additionally, the interaction between the study approach learning variable and the groups was significant (p < 0.001, less than α = 0.05). The criterion contributing most to the study approach factor was versatility in searching for information, determining the context, and posing the study problem. Approximately 24.8% of the variability in study approach learning was explained at the time of measurement (η² = 0.248). See Tables 4 and 5.

Table 4
Within-subject effects
Cases                       Sum of Squares   df    Mean Square   F         p        η²
Study approach              2466.212         1     2466.212      153.207   < .001   0.217
Study approach ✻ GROUPS     1320.991         2     660.495       41.032    < .001   0.116
Residuals                   1818.988         113   16.097
Note: Sum of Squares Type III

Table 5
Between-subject effects
Cases       Sum of Squares   df    Mean Square   F        p        η²
GROUPS      2821.059         2     1410.529      54.246   < .001   0.248
Residuals   2938.299         113   26.003
Note: Sum of Squares Type III

Step 3. Research Design

Levene's test of equality of variances was applied; the p values of 0.232 for the pretest and 0.089 for the posttest are above α = 0.05, so the assumption of equal variances between the groups is met.

There were significant differences at a general level in research design learning across the groups, with p < 0.001, lower than α = 0.05. The results indicate significant differences between pretest and posttest scores without separating the participants into control and experimental groups; regardless of group, participants scored higher on the posttest than on the pretest. Additionally, the interaction between the research design learning variable and the groups was significant (p < 0.001, less than α = 0.05), showing that pretest and posttest differences varied according to the group. The criterion contributing most to the research design factor was that the methodological procedure is dynamic and interactive in searching for scientific information to describe the method and procedure of the study. 1.3% of the variability in research design learning was explained at the time of measurement (η² = 0.013). See Tables 6 and 7.

Table 6
Within-subject effects
Cases                               Sum of Squares   df    Mean Square   F         p        η²
Research design PRE POST            1657.366         1     1657.366      191.990   < .001   0.397
Research design PRE POST ✻ GROUPS   779.316          2     389.658       45.138    < .001   0.186
Residuals                           923.684          107   8.633
Note: Sum of Squares Type III

Table 7
Between-subject effects
Cases       Sum of Squares   df    Mean Square   F       p       η²
GROUPS      52.483           2     26.241        3.661   0.029   0.013
Residuals   767.044          107   7.169
Note: Sum of Squares Type III

Step 4. Data analysis

Levene's test of equality of variances was applied; the p values of 0.073 for the pretest and 0.423 for the posttest are above α = 0.05, so the assumption of equal variances between the groups is met.

There were significant differences at a general level in data analysis learning across the groups, with p < 0.001, lower than α = 0.05. The results indicate significant differences between pretest and posttest scores without separating the participants into control and experimental groups; regardless of group, participants scored higher on the posttest than on the pretest. Additionally, the interaction between the data analysis learning variable and the groups was significant (p < 0.001, less than α = 0.05), showing that pretest and posttest differences varied according to the group.

The criterion contributing most to the data analysis factor is that the procedure enriches the search for scientific information needed to understand and interpret the statistics. 34% of the variability in data analysis learning was explained at the time of measurement (η² = 0.34). See Tables 8 and 9.

Table 8
Within-subject effects
Cases                                     Sum of Squares   df    Mean Square   F        p        η²
Response analysis Pre-Posttest            111.110          1     111.110       31.318   < .001   0.040
Response analysis Pre-Posttest ✻ GROUPS   246.912          2     123.456       34.798   < .001   0.089
Residuals                                 379.616          107   3.548
Note: Sum of Squares Type III

Table 9
Between-subject effects
Cases       Sum of Squares   df    Mean Square   F        p        η²
GROUPS      948.082          2     474.041       46.127   < .001   0.340
Residuals   1099.627         107   10.277
Note: Sum of Squares Type III

DISCUSSION AND CONCLUSIONS

In the research idea process, the effect among the participants shows significant differences between the results of the first questionnaire and the subsequent questionnaire applied to the groups, confirming the research idea learning hypothesis. Additionally, the interaction between the research idea variable and the groups shows significant differences, meaning that the progress made by the experimental groups between the pretest and posttest is significantly greater than that of the control group. It is confirmed that, in general, the new AI-based treatment of the experimental groups was more effective for learning the research idea. Significant progress is observed with the new AI procedure in seeking information and clarifying the focus and purpose of the study, which helps to better define the study title, compared with the progress of the comparison group that used the traditional procedure.

The new procedure contributes to improving some areas identified in the works of Aldana et al. (2020) and Bozkurt et al. (2023) by increasing productivity in project construction, diversified feedback with the teacher, and the quality of the project's structure and content from the beginning, contributing to the assimilation of research knowledge.

In the study approach learning process, the effect among the participants shows significant differences between the results of the first questionnaire and the subsequent questionnaire applied to the groups, affirming the study approach learning hypothesis. Additionally, the interaction between the study approach variable and the groups shows significant differences, meaning that the progress made by the experimental groups between the pretest and posttest is significantly greater than that of the control group. It is confirmed that, in general, the new AI-based treatment of the experimental groups was more effective for learning. Significant advances are observed in the study approach assisted by the new AI procedure, which helped diversify the theoretical review and optimize writing, as mentioned by Lalaleo et al. (2024), in describing the context, posing the problem, and defining the purpose and objectives of the study. However, there is a need to strengthen critical analysis and scientific writing and to reduce information bias in constructing the study context, which would help reduce omissions caused by methodological unfamiliarity. The control group, which used the traditional procedure, obtained only a slight advance, mainly related to feedback from the tutoring teacher.

In the research design learning process, the effect among the participants shows significant differences between the results of the first questionnaire and the subsequent questionnaire applied to the groups, affirming the research design learning hypothesis. Additionally, the interaction between the research design variable and the groups shows significant differences, meaning that the progress made by the experimental groups between the pretest and posttest is greater than that of the control group. It is confirmed that, in general, the new AI-based treatment of the experimental groups was more effective for learning research design. Only weak advances are observed in the research design assisted by the new AI procedure, mainly in seeking structured information and in the coherence between the method and the methodological procedure. This establishes the need to deepen the analysis in order to improve the quality of the project report according to the nature, purpose, and level of the study, characteristics of the programmatic methodology of scientific research (Supo & Zacarías, 2020).

In the data analysis learning process, the effect among the participants shows significant differences between the results of the first questionnaire and the subsequent questionnaire applied to the groups, affirming the data analysis learning hypothesis. Additionally, the interaction between the data analysis variable and the groups shows significant differences, meaning that the progress made by the experimental groups between the pretest and posttest is significantly greater than that of the control group. It is confirmed that, in general, the new AI-based treatment of the experimental groups was more effective for learning data analysis. Significant advances are observed in data analysis assisted by the new AI procedure, especially in univariate descriptive analysis and data reading, compared with the progress of the comparison group that used the traditional procedure. The new procedure contributes to reading data through graphs and to proposing ideas coherent with and appropriate to the nature, purpose, and level of the study, improving creative learning (Sari & Purwanta, 2021).

The new AI-assisted procedure obtained satisfactory results for hypothesis testing and its original contribution to scientific research through a modern design of activities and pedagogical methodology in the classroom.

There are limitations due to the non-random formation of groups, the use of self-reported data, selective memory of participants, tendency to respond positively, limited internet access, and biases in algorithm responses. Future research projects should incorporate discussions on final project reports, tutor feedback, and adapted peer assessment.

Exploring the potential of AI-powered automatic agents to assess critical understanding in a research-adapted learning context and define knowledge learning patterns would be an interesting avenue for future study.

REFERENCES

Aldana, G. M., Babativa, D. A., Caraballo, G. J., & Rey, C. A. (2020). Escala de actitudes hacia la investigación (EACIN): Evaluación de sus propiedades psicométricas en una muestra colombiana. Revista CES Psicología, 13(1), 89-103. https://doi.org/10.21615/cesp.13.1.6

Alhayani, B., Mohammed, H. J., Chaloob, I. Z., & Ahmed, J. S. (2021). Effectiveness of artificial intelligence techniques against cyber security risks apply of IT industry. Materials Today: Proceedings, 44, 3356-3361. https://doi.org/10.1016/j.matpr.2021.02.531

Ayala, O. (2020). Competencias informacionales y competencias investigativas en estudiantes universitarios. Revista Innova Educación, 2(4), 668-679. https://doi.org/10.35622/j.rie.2020.04.011

Baker, R. S. (2016). Stupid Tutoring Systems, Intelligent Humans. International Journal of Artificial Intelligence in Education, 26(2), 600-614. https://doi.org/10.1007/s40593-016-0105-0

Bozkurt, A., Junhong, X., Lambert, S., Pazurek, A., Crompton, H., Koseoglu, S., Farrow, R., Bond, M., Nerantzi, C., Honeychurch, S., Bali, M., Dron, J., Mir, K., Gray, B. C., Stewart, B. P., Costello, E., Mason, J., Stracke, C. M., Romero-Hall, E., ... Brooks, C. (2023). Speculative futures on ChatGPT and generative artificial intelligence (AI): A collective reflection from the educational landscape. Asian Journal of Distance Education, 18(1), 53-130. https://doi.org/10.5281/zenodo.7636568

Brown, T. B., Mann, B., Ryder, N., Subbiah, M., Kaplan, J., Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G., Askell, A., Agarwal, S., Herbert-Voss, A., Krueger, G., Henighan, T., Child, R., Ramesh, A., Ziegler, D. M., Wu, J., Winter, C., ... Amodei, D. (2020). Language Models are Few-Shot Learners. arXiv. https://doi.org/10.48550/arXiv.2005.14165

Carvajal, I. C. M. (2023). Inteligencia artificial (IA) en la investigación científica: Sistematización y reflexiones sobre experiencias educativas. Revista EDUCARE - UPEL-IPB - Segunda Nueva Etapa 2.0, 27(3), 112-137. https://doi.org/10.46498/reduipb.v27i3.2050

DataScientest. (2023). Inteligencia artificial: definición, historia, usos y peligros. DataScientest. https://datascientest.com/es/inteligencia-artificial-definicion

Dathathri, S., Madotto, A., Lan, Z., Fung, P., & Neubig, G. (2019). Plug and play language models: A simple approach to controlled text generation. arXiv. https://arxiv.org/abs/1912.02164

Díaz, L. A. (2024). El uso de la inteligencia artificial en la investigación científica. Revista Historia de la Educación Latinoamericana, 26(43), 1-15. https://doi.org/10.19053/uptc.01227238.18014

García-Peñalvo, F. J. (2020). Learning Analytics as a Breakthrough in Educational Improvement. In D. Burgos (Ed.), Radical Solutions and Learning Analytics: Personalised Learning and Teaching Through Big Data (pp. 1-15). Springer Singapore. https://doi.org/10.1007/978-981-15-4526-9_1

García-Peñalvo, F. J. (2023). Redefiniendo la relación del profesorado con la inteligencia artificial. II Congreso Internacional de Educación. https://bit.ly/46Y8Y77

García-Peñalvo, F. J., Llorens-Largo, F., & Vidal, J. (2024). La nueva realidad de la educación ante los avances de la inteligencia artificial generativa. RIED. Revista Iberoamericana de Educación a Distancia, 27(1), 9-39. https://doi.org/10.5944/ried.27.1.37716

García Rosado, L. F. (2024). Inteligencia artificial, una mirada desde la asignatura de Metodología de la investigación científica: Relato de experiencia docente. Educación Superior, (37), 11-34. https://doi.org/10.56918/es.2024.i37.pp11-34

González Sánchez, J. L., Villota García, F. R., Moscoso Parra, A. E., Garces Calva, S. W., & Bazurto Arévalo, B. M. (2023). Aplicación de la Inteligencia Artificial en la Educación Superior. Dominio de las Ciencias, 9(3), 1097-1108. https://doi.org/10.23857/dc.v9i3.3488

Gruetzemacher, R., & Paradice, D. (2022). Deep Transfer Learning & Beyond: Transformer Language Models in Information Systems Research. ACM Computing Surveys, 54(10s), 1-30. https://doi.org/10.1145/3505245

Jia, K., Wang, P., Li, Y., Chen, Z., Jiang, X., Lin, C. L., & Chin, T. (2022). Research landscape of artificial intelligence and e-learning: A bibliometric research. Frontiers in Psychology, 13, 795039. https://doi.org/10.3389/fpsyg.2022.795039

Kotler, P., Kartajaya, H., & Setiawan, I. (2021). Marketing 5.0: Technology for Humanity. John Wiley & Sons.

Lalaleo, F. R., Carrera, F. A., & Martínez, A. P. (2024). La IA como herramienta de apoyo en la investigación científicas en los docentes investigadores del ISTE. Espíritu Emprendedor TES, 8(1), 97-110. https://doi.org/10.33970/eetes.v8.n1.2024.377

Lang, C., Siemens, G., Wise, A. F., Gašević, D., & Merceron, A. (Eds.). (2022). The Handbook of Learning Analytics. Society for Learning Analytics Research (SoLAR). https://doi.org/10.18608/hla22

Litardo, J. T., Wong, C. R., Ruiz, S. M., & Benites, K. P. (2023). Retos y oportunidades docente en la implementación de la inteligencia artificial en la educación superior ecuatoriana. South Florida Journal of Development, 4(2), 867-889. https://doi.org/10.46932/sfjdv4n2-020

Liu, Y., Chen, L., & Yao, Z. (2022). The application of artificial intelligence assistant to deep learning in teachers’ teaching and students’ learning processes. Frontiers in Psychology, 13, 929175. https://doi.org/10.3389/fpsyg.2022.929175

López Martín, E. (2023). El papel de la inteligencia artificial generativa en la publicación científica. Educación XXI, 27(1), 9-15. https://doi.org/10.5944/educxx1.39205

Ma, W., Adesope, O. O., Nesbit, J. C., & Liu, Q. (2014). Intelligent tutoring systems and learning outcomes: A meta-analysis. Journal of Educational Psychology, 106(4), 901-918. https://doi.org/10.1037/a0037123

McCarthy, J., Minsky, M. L., Rochester, N., & Shannon, C. E. (1955). A proposal for the Dartmouth summer research project on artificial intelligence. AI Magazine, 27(4), 12-14. https://doi.org/10.1609/aimag.v27i4.1904

McCulloch, W. S., & Pitts, W. (1943). A logical calculus of the ideas immanent in nervous activity. The Bulletin of Mathematical Biophysics, 5(4), 115-133. https://doi.org/10.1007/bf02478259

Morales-Chan, M. (2023). Explorando el potencial de Chat GPT: una clasificación de Prompts efectivos para la enseñanza. Paper GES. https://biblioteca.galileo.edu/tesario/handle/123456789/1348

Nilsson, N. J. (1982). Artificial intelligence: Engineering, science, or slogan? AI Magazine, 3(1), 2-17. https://doi.org/10.1609/aimag.v3i1.359

Rudolph, J., Tan, S., & Tan, S. (2023). ChatGPT: Bullshit spewer or the end of traditional assessments in higher education? Journal of Applied Learning and Teaching, 6(1), 1-22. https://doi.org/10.37074/jalt.2023.6.1.9

Salmerón Moreira, Y. M., Luna Álvarez, H. E., Murillo Encarnación, W. G., & Pacheco Gómez, V. A. (2023). El futuro de la Inteligencia Artificial para la educación en las instituciones de Educación Superior. Revista Conrado, 19(93), 27-34.

Sánchez, O. V. G. (2023). Uso y percepción de ChatGPT en la educación superior. Revista de Investigación en Tecnologías de la Información, 11(23), 98-107. https://doi.org/10.36825/RITI.11.23.009

Sandino, M., Espinoza, M. J., & Berrios, R. (2019). Microprogramación del componente de investigación de mercados. Proyecto curricular para docencia. Universidad Nacional Autónoma de Nicaragua UNAN León.

Sari, J. M., & Purwanta, E. (2021). The Implementation of Artificial Intelligence in Creative Learning Based on Stem in the Era of Society 5.0. Tadris: Jurnal Keguruan dan Ilmu Tarbiyah, 6(2), 433-440. https://doi.org/10.24042/tadris.v6i2.10135

Supo, J., & Zacarías, H. (2020). Metodología de la investigación científica. Seminarios de investigación científica (3ra ed.). Sociedad Hispana de Investigación Científica, Editorial BIOESTADISTICO EEDU EIRL.

Turing, A. M. (1950). Computing Machinery and Intelligence. Mind, 59(236), 433-460. https://doi.org/10.1093/mind/lix.236.433

UNESCO IESALC. (2020). COVID-19 y educación superior: De los efectos inmediatos al día después; Análisis de impactos, respuestas políticas y recomendaciones. https://www.iesalc.unesco.org/wp-content/uploads/2020/05/COVID-19-ES-130520.pdf

UNESCO. (2021). El aporte de la inteligencia artificial y las TIC avanzadas a las sociedades del conocimiento: Una perspectiva de derechos, apertura, acceso y múltiples actores. https://unesdoc.unesco.org/ark:/48223/pf0000375796

UNESCO IESALC. (2023). ChatGPT e Inteligencia artificial en la educación superior. Guía de inicio rápida. https://www.iesalc.unesco.org/wp-content/uploads/2023/05/ChatGPT-e-Inteligencia-artificial-en-la-educacion-superior.-Gui%CC%81a-de-inicio-ra%CC%81pida.pdf

Vera, F. (2023). Integración de la Inteligencia Artificial en la Educación superior: Desafíos y oportunidades. Transformar, 4(1), 17-34. https://doi.org/10.56219/dialectica.v1i21.2322

Villarroel, J. J. G. (2021). Implicancia de la inteligencia artificial en las aulas virtuales para la educación superior. Orbis Tertius-UPAL, 5(10), 31-52. https://doi.org/10.59748/ot.v5i10.98

Yang, S. J., Ogata, H., Matsui, T., & Chen, N. S. (2021). Human-centered artificial intelligence in education: Seeing the invisible through the visible. Computers and Education: Artificial Intelligence, 2, 100008. https://doi.org/10.1016/j.caeai.2021.100008

Yilmaz, R., Yurdugül, H., Karaoğlan Yilmaz, F. G., Şahin, M., Sulak, S., Aydin, F., Tepgeç, M., Terzi Müftüoğlu, C., & Oral, Ö. (2022). Smart MOOC integrated with intelligent tutoring: A system architecture and framework model proposal. Computers and Education: Artificial Intelligence, 3, 100092. https://doi.org/10.1016/j.caeai.2022.100092

Yu, W., & Sup, W. (2021). Artificial intelligence for the development of university education management. Frontiers in Educational Research, 4(1), 120-125. https://doi.org/10.25236/FER.2021.040120

Zawacki-Richter, O., Marín, V. I., Bond, M., & Gouverneur, F. (2019). Systematic review of research on artificial intelligence applications in higher education–where are the educators? International Journal of Educational Technology in Higher Education, 16(1), 1-27. https://doi.org/10.1186/s41239-019-0171-0

Additional information

How to cite: Berrios Zepeda, R., & Márquez Mora, L. (2025). Generative Artificial Intelligence agent in scientific research. An explanatory analysis of classroom learning. [Agente de Inteligencia Artificial Generativa en investigación científica. Un análisis explicativo del aprendizaje en el aula]. RIED-Revista Iberoamericana de Educación a Distancia, 28(2), 39-55. https://doi.org/10.5944/ried.28.2.43545
