Abstract: This scoping review aims to analyze the ethical implications of Artificial Intelligence (AI) in nursing care by identifying its associated risks and opportunities. To achieve this, a literature review was conducted, examining research published between 2018 and 2024. Specifically, the analysis focused on AI applications and models implemented in diverse healthcare contexts, with particular attention to nursing. The results show that AI poses several risks, including increased inequality, technology dependence, algorithmic bias, dehumanization, privacy breaches, data security vulnerabilities, and challenges related to automation. Nevertheless, AI also offers significant benefits, such as enhanced efficiency, advanced diagnostic and treatment capabilities, continuing education and training, personalized care, expanded access to information, and strengthened interdisciplinary collaboration. In conclusion, this review highlights AI’s transformative potential in nursing care, provided it is integrated ethically and responsibly. Indeed, an ethical integration of AI can contribute to better clinical outcomes and higher-quality care. Looking ahead, future research should explore strategies to minimize risks while maximizing the benefits of AI’s implementation.
Keywords: artificial intelligence, ethics of technology, healthcare automation, healthcare ethics, nursing care, technological innovation.
Review
Risks and Opportunities of Artificial Intelligence in Nursing Care: A Scoping Review
Received: October 2, 2024
Approved: March 4, 2025
Throughout history, innovation has driven the advancement of civilization (Von Hippel & Suddendorf, 2018). From torches to electric light bulbs, technological evolution has been pivotal to human survival and progress (Molina Gómez et al., 2015). This innovative trend has extended to the development of complex systems such as Artificial Intelligence (AI), which today stands as one of humanity’s most significant breakthroughs—reflecting both technical mastery and a deeper understanding of the world (Liu et al., 2018).
AI is defined as a system’s ability to interpret data, learn from it, and adapt to achieve specific objectives (Haenlein & Kaplan, 2019). This technology mimics human cognitive functions, enabling machines to solve problems, recognize images and speech, and make informed decisions (Jiang et al., 2017). Fueled by big data and advances in computing power, AI has transitioned from experimental research to real-world integration across sectors like business and healthcare (Haenlein & Kaplan, 2019).
In healthcare, AI is revolutionizing diagnostics and treatment, delivering unprecedented precision. For instance, image recognition tools have outperformed physicians in detecting skin cancer, transforming diagnostic approaches (Haenlein & Kaplan, 2019). In nursing, AI aids in identifying warning signs and refining care strategies. However, these tools require training on clinical data, which carries inherent risks (Jiang et al., 2017). Beyond clinical applications, AI optimizes administrative efficiency—reducing costs, streamlining resources, and enhancing patient services (Sunarti et al., 2021).
Nevertheless, AI raises serious ethical dilemmas in nursing care, including potential reductions in clinical and administrative staff (López Baroni, 2019). Moreover, if trained on biased data, AI may make erroneous or unfair decisions regarding diagnoses and treatments, underscoring the need to evaluate the doctrine of double effect (Seibert et al., 2021; Sunarti et al., 2021). The fear of technology replacing human labor is not new; it echoes the 19th-century Luddite movement in England, where workers destroyed machines to protest job losses. In nursing, however, this scenario seems unlikely given the global nursing shortage and the irreplaceable role of compassion (García Uribe, 2020), emotions, and subjective judgment in delivering quality care (Seibert et al., 2021). Unlike the Luddites, today’s challenge lies not in resisting technology but in harnessing it as a tool.
AI also poses significant privacy and data security challenges, as breaches could compromise patient confidentiality (Stahl et al., 2023). Legal liability is another critical issue: if an AI system makes a mistake that harms a patient, who should be held accountable? (Sung et al., 2020). These dilemmas call for integrating AI into nursing care while upholding fundamental ethical principles, including autonomy, justice, beneficence, and non-maleficence (López Baroni, 2019; Sunarti et al., 2021). Considering the above, this article provides an overview of AI’s risks and opportunities in nursing, advocating for its role as a complement rather than a replacement.
This scoping review followed the PRISMA extension for scoping reviews (PRISMA-ScR; Tricco et al., 2018), the standard reporting framework for this type of study. A literature search was performed across four databases (PubMed, SciELO, Scopus, and ScienceDirect) for articles published between 2018 and 2024. Search strategies were designed using key terms in English—nursing, artificial intelligence, caring, machine learning, deep learning, and health automation—combined with Boolean operators. Search strings were refined for each database, and articles in both English and Spanish were considered.
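By way of illustration only (the article does not reproduce the authors' exact database-specific strategies), a Boolean strategy combining the key terms listed above could be assembled along these lines:

```python
# Hypothetical sketch: building a Boolean search string from the review's
# key terms. The actual per-database strings used by the authors may differ.
population = ["nursing", "caring"]
technology = ["artificial intelligence", "machine learning",
              "deep learning", "health automation"]

def or_block(terms):
    """Join terms with OR, quoting multi-word phrases, inside parentheses."""
    return "(" + " OR ".join(f'"{t}"' if " " in t else t for t in terms) + ")"

query = f"{or_block(population)} AND {or_block(technology)}"
print(query)
# (nursing OR caring) AND ("artificial intelligence" OR "machine learning"
#  OR "deep learning" OR "health automation")
```

In practice such a string would then be adapted to each database's syntax (e.g., field tags in PubMed), which is what the refinement step describes.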
An initial selection based on titles was conducted by downloading metadata for all articles that met the following inclusion criteria: observational, experimental, cross-sectional, diagnostic, case–control, cohort, and qualitative studies; scoping, systematic, and narrative reviews; and clinical trials. All had to be published between 2018 and 2024 and address the use of AI in nursing care. Duplicate articles were identified and removed using a categorical matrix by comparing titles, authors, and DOIs. Further filtering was performed by reviewing abstracts and full texts. Quality and risk of bias were assessed based on the study design, following Munn’s (2014) recommendations for review studies. This process was carried out independently by two researchers, and any discrepancies were resolved with the intervention of a third researcher.
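A minimal sketch of the deduplication step described above, under the assumption that records were matched on DOI first and then on a normalized title/author pair (the article does not specify the exact matching logic):

```python
# Hypothetical deduplication sketch: compare records on DOI, then on a
# normalized (title, authors) key, keeping the first occurrence of each.
def normalize(text):
    return " ".join(text.lower().split())

def deduplicate(records):
    seen_dois, seen_keys, unique = set(), set(), []
    for rec in records:
        doi = (rec.get("doi") or "").lower()
        key = (normalize(rec["title"]), normalize(rec["authors"]))
        if doi and doi in seen_dois:
            continue  # exact DOI already seen
        if key in seen_keys:
            continue  # same title/author pair already seen
        if doi:
            seen_dois.add(doi)
        seen_keys.add(key)
        unique.append(rec)
    return unique

records = [
    {"title": "AI in Nursing", "authors": "Smith J", "doi": "10.1/abc"},
    {"title": "AI in nursing", "authors": "smith j", "doi": "10.1/ABC"},
    {"title": "ML for Fall Prediction", "authors": "Lee K", "doi": ""},
]
print(len(deduplicate(records)))  # the two matching records collapse to one: 2
```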
The methodological quality and risk of bias of selected studies were assessed using criteria from widely recognized guidelines, including STROBE (Cuschieri, 2019) for observational and cohort studies; PRISMA (Page et al., 2021) for systematic reviews; PRISMA ScR (Tricco et al., 2018) for scoping reviews; SANRA (Baethge et al., 2019) for narrative reviews; COREQ (Tong et al., 2007) for qualitative studies; CONSORT (Butcher et al., 2022) for clinical trials; STARD (Cohen et al., 2016) for diagnostic accuracy studies; and TRIPOD (Collins et al., 2015) for prediction model studies. These guidelines were employed as frameworks to guide the evaluation of key aspects such as clarity in research objectives, methodological description and selection, study population and sample, data analysis, and reporting of biases and limitations.
Relevant data from selected studies were extracted into a predesigned Excel matrix, capturing objectives, methodologies, findings, and conclusions related to AI’s risks and opportunities in nursing care. These data were then analyzed to identify patterns, trends, and gaps in literature. Analytical categories were established inductively using a categorical matrix based on recurring patterns. Risk-related categories, on the one hand, included inequality, technological dependence, algorithmic bias, dehumanization, privacy and security, accountability and transparency, and automation. On the other hand, opportunity-related categories were efficiency improvement, advanced diagnosis and treatment, education and training, personalized care, access to information, and interdisciplinary collaboration.
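The categorical matrix can be pictured as a simple tagging-and-tallying structure. The sketch below is purely illustrative; the study-to-category assignments shown are hypothetical examples, not the authors' actual matrix:

```python
# Hypothetical sketch of the categorical matrix: each extracted study is
# tagged with one or more inductively derived categories, then tallied to
# reveal recurring patterns. Assignments here are invented for illustration.
from collections import Counter

extracted = [
    {"study": "Study A", "categories": ["algorithmic bias", "inequality"]},
    {"study": "Study B", "categories": ["inequality", "education and training"]},
    {"study": "Study C", "categories": ["efficiency improvement", "privacy and security"]},
]

counts = Counter(cat for row in extracted for cat in row["categories"])
print(counts["inequality"])  # 2
```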
From the PubMed, SciELO, Scopus, and ScienceDirect databases, a total of 1,008 articles were initially selected based on their titles. After removing 334 duplicates, 674 articles remained. A further 300 were excluded after abstract review and 189 after full-text review, using the following exclusion criteria: letters to the editor (17), editorials (16), preprints (2), full text not available (63), and lack of thematic relevance (391), leaving a total of 185 articles. Of these, 133 studies did not meet the essential requirements defined by the quality guidelines and were therefore excluded. Ultimately, 52 studies were deemed suitable for inclusion in this review. This process is illustrated in Figure 1.
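The article flow reported above is internally consistent, as a quick arithmetic check confirms (the listed exclusion reasons sum to the two screening-stage totals combined):

```python
# Sketch verifying the article-flow arithmetic reported in the text.
identified = 1008
duplicates = 334
after_dedup = identified - duplicates                 # 674 remain

excluded_abstract = 300
excluded_fulltext = 189
assessed = after_dedup - excluded_abstract - excluded_fulltext  # 185 remain

# The itemized exclusion reasons should account for both screening stages.
reasons = {"letters to the editor": 17, "editorials": 16, "preprints": 2,
           "full text unavailable": 63, "not thematically relevant": 391}
assert sum(reasons.values()) == excluded_abstract + excluded_fulltext  # 489

failed_quality = 133
included = assessed - failed_quality                  # 52 studies included
print(after_dedup, assessed, included)                # 674 185 52
```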
The oldest study included was published in 2018, and the most recent in 2024. A total of 51 studies were sourced from PubMed and one from ScienceDirect. Table 1 presents the methodological guidelines used for evaluating each type of article, along with the average quality score and the highest and lowest ratings for each guideline.
Based on this review, the main risks and opportunities associated with AI in nursing care can be grouped into the following subcategories:
Impact on Vulnerable Groups and Individual Technical Factors. The use of AI to predict falls in hospitalized patients may exacerbate inequalities due to biases present in training data. These biases affect model fairness, potentially leading to inadequate care for marginalized groups or vulnerable populations and increasing the risk of errors in prediction and care management (Chen & Xu, 2023).
Technological and Economic Barriers. Unequal access to advanced technologies such as AI and virtual reality in nursing education puts under-resourced institutions at a disadvantage. This limits their students’ ability to develop critical skills, creating disparities in the quality of education received (Harmon et al., 2021).
Overreliance on AI Outputs. While AI has improved diagnostic accuracy and streamlined clinical processes, excessive reliance on these technological models may erode professionals’ ability to make decisions grounded in clinical judgment. This becomes problematic in complex cases where AI may fail to provide adequate solutions due to limitations in data or model training (Huqh et al., 2022).
Loss of Critical Skills. In nursing, the use of AI may limit professionals’ ability to address complex cases that do not fit predefined models (Ibuki et al., 2024). This technological dependence increases the risk of losing essential skills such as empathy and adaptability, which are crucial for providing personalized care (Ibuki et al., 2024).
Bias in Algorithm Development. Although there are notable examples of AI systems developed in collaboration with healthcare professionals, such as IBM Watson for Oncology (Somashekhar et al., 2018), which aim to address specific clinical needs, several studies highlight that such collaboration is not always the norm (Abbasgholizadeh Rahimi et al., 2021; Abràmoff et al., 2023). In some instances, algorithms are trained without sufficient understanding of the clinical context, leading to outputs that perpetuate existing biases. This underscores the importance of ensuring consistent and meaningful integration of healthcare personnel throughout the design and implementation phases of AI technologies.
Variability in Accuracy. Limitations in data generalization across clinical contexts have led to false positives and negatives, which may compromise clinical decision-making. Additionally, variability in accuracy is influenced by factors such as input data quality and imaging conditions (Li et al., 2021; Liu et al., 2019). This concern is especially critical in pediatric intensive care settings, where accuracy in predicting complications, such as venous thrombosis, is essential for patient safety (Lei et al., 2023; Li et al., 2021).
Disparity in Data Collection. The quality and diversity of datasets used to train AI models is another key contributor to algorithmic bias. Systems trained on data from specific populations often underrepresent minorities, thereby perpetuating disparities in care. This issue becomes especially problematic when attempting to predict social determinants of health, as historical data fail to adequately reflect marginalized communities (Ronquillo et al., 2022).
Reduction in Human Supervision and Skills. The adoption of technologies such as digital angiography in hemodialysis patients and AI in medical education has raised concerns about diminishing human involvement in healthcare. Excessive reliance on technology may reduce clinical supervision and hinder the development of essential human skills such as empathy and communication. As a result, future healthcare professionals may become less capable of addressing the emotional and ethical dimensions of care, due to an overemphasis on technological solutions (Lee, Wu et al., 2021; Mi, 2022).
Weakening of Patient–Provider Relationships. Patients express concern that technology could replace human interaction, undermining trust and the quality of their relationships with healthcare professionals. Moreover, inadequate training in AI use increases the risk of errors and limits the effectiveness of these tools, potentially compromising the quality of care (Fazakarley et al., 2024).
Data Processing and Protection. AI systems process vast amounts of personal data, raising concerns about how this information is protected and utilized. In many cases, patients are not fully informed about how their data are collected and processed, which may compromise their privacy (Ng et al., 2022). This is particularly critical in direct-to-consumer health applications, where lack of transparency regarding the use of personal data can result in serious vulnerabilities (He et al., 2023).
Challenges in Interoperability and Security. The use of AI chatbots in programs such as weight loss interventions has demonstrated that integration with multiple devices and platforms can heighten the risks to personal data security. Users may be unaware of the implications of such integration, leaving them vulnerable to privacy breaches if the platforms involved do not implement adequate safeguards (Chew, 2022).
Transparency in Decision-Making and Accountability. The opacity of AI systems prevents healthcare professionals and patients from understanding how clinical decisions are made, complicating accountability in the event of harm or error (Masoumian Hosseini et al., 2023). In critical situations, such as emergency care, the absence of clear explanations from AI systems may erode trust and limit human intervention, worsening the impact of potential failures (Barwise et al., 2024; Masoumian Hosseini et al., 2023).
Need for Clear Accountability Mechanisms. There is a pressing need to clearly define responsibility in cases of error. Automated systems often make decisions without direct professional input, making it difficult to assign accountability when problems arise (Barwise et al., 2024). This highlights the importance of establishing regulatory frameworks and oversight mechanisms to determine who should be held accountable—whether technology developers, healthcare providers, or both.
Reduction in Human Interaction. The automation of processes such as pain assessment through AI may reduce direct interaction between healthcare professionals and patients, potentially dehumanizing care by limiting empathy and negatively affecting the quality of the service provided. Furthermore, when AI models lack representative data, their effectiveness across different clinical settings may be compromised, raising concerns about their universal applicability (Abuzaid et al., 2022; Zhang et al., 2023).
Job Uncertainty. Automation raises concerns about its impact on employment. As more tasks become automated, the demand for personnel may decline, creating uncertainty regarding job stability. This issue has been observed in other sectors, where workers experience high stress levels due to job insecurity caused by automation. Moreover, the psychosocial factors that affect worker well-being are not always considered (Abuzaid et al., 2022; Cheng et al., 2021).
Table 2 shows how studies were grouped by thematic categories related to the phenomenon of interest.
Optimization of Clinical Resources and Workflows. In certain settings, AI has proven effective in optimizing workflows and resource management in clinical care. From automating documentation and creating personalized care plans to predicting the risk of falls and pressure ulcers, AI has enabled nurses to focus more on direct patient care, thereby improving the quality of service (Ng et al., 2022). In hospital management, AI has facilitated the prediction of length of stay and mortality in patients with diabetes and hypertension, optimizing bed utilization and other clinical resources (Barsasella et al., 2022).
Support for Clinical Decision-Making. AI has been shown to support clinical decision-making in areas such as hospital readmission prediction and respiratory infection management. Its implementation has significantly reduced rehospitalization rates, improving clinical outcomes (Romero-Brufau et al., 2020). In addition, some systems have assisted in patient triage by offering monitoring intervals based on clinical guidelines, thereby improving diagnostic accuracy (Li et al., 2022). In cases involving language barriers, AI has enhanced equity in care by quickly identifying patients who require interpretation services (Barwise et al., 2024).
Reduction of Workload. In the nursing field, AI has helped reduce workload by automating routine tasks such as intravenous bag monitoring, pressure injury management, and administrative duties, freeing up time for other responsibilities (Hwang et al., 2023; Chen et al., 2022). The introduction of robots for repetitive tasks has reduced physical exhaustion, allowing professionals to focus on clinical judgment and empathetic care (Zrínyi et al., 2022).
Prediction of Complications. AI has been essential in predicting complications. In diabetic patients, AI models identified risks of neuropathy, nephropathy, retinopathy, and amputations, enabling personalized interventions (Gosak et al., 2022; Mousa et al., 2023). In hospitalized patients, AI predicted severe hypoglycemia and the risk of developing pressure injuries, facilitating early identification of adverse events (Fralick et al., 2021; Pei et al., 2023).
Real-Time Monitoring. AI has also enhanced real-time monitoring in various areas. In emergency departments, algorithms predicted complications in patients experiencing hyperglycemic crises (Hsu et al., 2023).
Support for Mental Health and Treatment Adherence. In the mental health field, AI accurately interpreted human emotions, aiding in diagnosis and emotional support (Elyoseph et al., 2024). AI models predicted depressive symptoms in older adults without requiring questionnaires (Susanty et al., 2023). Moreover, AI helped monitor treatment adherence through smartwatches, ensuring proper medication intake (Odhiambo et al., 2023).
Education and Training. The integration of AI and virtual reality has been shown to enhance pain management education, allowing students to practice in simulated environments without posing risks to patients. Furthermore, AI personalizes learning scenarios, improving knowledge retention (Harmon et al., 2021). In clinical practice, AI has supported evidence-based decision-making, promoting continuous education for nursing professionals and helping them stay current with recent practices and technologies (Abuzaid et al., 2022). In addition, the use of humanoid robots in hospital settings has also proven effective in promoting health literacy and vaccine comprehension (McIntosh et al., 2022).
Continuous and Emotional Care. AI and autonomous robots have transformed the care of patients with chronic illnesses and advanced dementia, demonstrating reductions in agitation, improved emotional well-being, and more patient-centered attention to emotional and social needs (David et al., 2021). These technologies also automate physical tasks such as mobilization, allowing staff to devote more time to specialized care (Cai et al., 2021; Stokes & Palmer, 2020).
Personalized Risk Prediction. AI has shown efficacy in clinical risk prevention. In nursing homes, predictive models identified key factors for preventing pressure ulcers (Lee, Shin et al., 2021). In home care, AI has enhanced hospitalization prediction by analyzing clinical notes (Topaz et al., 2020). AI also optimizes remote monitoring of older adults, detecting falls and cognitive decline, and supports health coaching for patients with type 2 diabetes as well as interventions for autism spectrum disorder (Schütz et al., 2022; Di et al., 2022; Jia et al., 2023).
Access to Information. AI has significantly improved access to critical information in hospital settings, enabling more informed and efficient decision-making in high-pressure situations. For example, AI-powered chatbots in hospitals have provided quick and accurate responses to caregiver inquiries, reducing errors and workload (Daniel et al., 2022). In emergency departments, AI-assisted systems for real-time medical record entry (via voice-to-text data conversion) have improved triage efficiency and accuracy by capturing more relevant information for patient care (Cho et al., 2022).
Interdisciplinary Collaboration. The integration of AI into healthcare has created new opportunities for interdisciplinary collaboration, as nurses, software engineers, and other professionals work together to develop more optimized technologies (Shang, 2021). Involving patients and the public in the design of these tools ensures their functionality while fostering more inclusive and equitable care that adapts to diverse user needs (Zidaru et al., 2021).
The findings mentioned above are summarized in Table 3. The studies were grouped into thematic categories related to the phenomenon of interest.
Despite the relatively recent application of AI in healthcare, its impact has generated a substantial and rapidly evolving body of evidence highlighting both risks and opportunities. Technology should not be viewed as a neutral instrument or a threat to humanized care, but rather as a tool that expands and redefines human capabilities by integrating technological advancements with essential human skills in nursing care (Monterroza Ríos et al., 2015). In this context, the notion of technocare emerges as a central axis for balancing the technical and human dimensions of care (García-Uribe et al., 2024).
The evidence indicates that AI is not without risks, which calls for a prudent and reflective approach to its applications in nursing. An Aristotelian perspective may be useful in finding a balanced position regarding the use of these technologies in healthcare. Issues such as inequality—particularly notable in the scarcity of studies from several Latin American countries, reflecting disparities in AI access—and automation hold special significance in already inequitable geopolitical contexts with major social challenges. The risks associated with technological dependence and the dehumanization of care require in-depth analysis, especially since humans have become “techno-persons” (Fazakarley et al., 2024; Huqh et al., 2022; Lee, Wu et al., 2021; Mi, 2022). Thus, care in the 21st century cannot be conceived devoid of technology.
Nonetheless, the results of this review also highlight the opportunities AI offers to transform nursing care. The automation of routine tasks, such as clinical monitoring and administrative management, can free up time for professionals to focus on activities that require essential human qualities (Noorbakhsh-Sabet et al., 2019). Additionally, AI’s predictive capabilities have proven valuable for anticipating complications and personalizing interventions, thereby improving clinical outcomes and enhancing the overall quality of care (Ng et al., 2022; Gosak et al., 2022).
A conscious examination of the applications and implications of AI in nursing leads to the recognition of both its potential and risks, calling for an Aristotelian balance between caution and exploration. In this regard, scholars such as Maliandi (Salerno, 2016) argue that new technologies may create biotechnological ethical dilemmas. Exploring the principles proposed by this author—precaution, exploration, non-discrimination, and respect for diversity—may be a valuable framework for situating a prudent balance in relation to the risks and opportunities posed by AI in nursing care.
It is important to note that AI is evolving at an extraordinary pace, and the information presented in this article may quickly become outdated. Consequently, maintaining an open and flexible perspective on these advancements will be crucial not only to mitigate implementation risks but also to harness AI’s potential for transforming nursing practice toward more efficient, equitable, and human-centered care.
AI holds significant potential to redefine nursing care, from improving operational efficiency to optimizing interdisciplinary collaboration. Although ethical concerns have been identified, the responsible adoption of AI could substantially enhance the quality of care and clinical outcomes.
To ensure that the implementation of AI in nursing is both effective and ethical, it is essential to establish ethical guidelines and regulatory frameworks that prioritize data protection, transparency, and respect for patient dignity. This would allow AI to become a reliable and equitable tool within healthcare systems. Future research should focus on evaluating strategies to minimize risks and maximize opportunities, ensuring that AI implementation contributes meaningfully to progress in this technological era.
How to reference: Castrillón Isaza, K. A., Giraldo Restrepo, J. C., & García Uribe, J. C. (2025). Risks and Opportunities of Artificial Intelligence in Nursing Care: A Scoping Review. Trilogía Ciencia Tecnología Sociedad, 17(35), e3272. https://doi.org/10.22430/21457778.3272
The authors declare no financial, professional, or personal conflicts of interest that could have unduly influenced the results or interpretations presented in this article.
All authors contributed to the development of the conceptual ideas, the study design, the analytical reflections, and the drafting and final revision of this manuscript.