Review
Risks and Opportunities of Artificial Intelligence in Nursing Care: A Scoping Review
Trilogía Ciencia Tecnología Sociedad, vol. 17, no. 35, pp. 1-27, 2025
Instituto Tecnológico Metropolitano
Received: October 02, 2024
Approved: March 04, 2025
Abstract: This scoping review aims to analyze the ethical implications of Artificial Intelligence (AI) in nursing care by identifying its associated risks and opportunities. To achieve this, a literature review was conducted, examining research published between 2018 and 2024. Specifically, the analysis focused on AI applications and models implemented in diverse healthcare contexts, with particular attention to nursing. The results show that AI poses several risks, including increased inequality, technology dependence, algorithmic bias, dehumanization, privacy breaches, data security vulnerabilities, and challenges related to automation. Nevertheless, AI also offers significant benefits, such as enhanced efficiency, advanced diagnostic and treatment capabilities, continuing education and training, personalized care, expanded access to information, and strengthened interdisciplinary collaboration. In conclusion, this review highlights AI’s transformative potential in nursing care, provided it is integrated ethically and responsibly. Indeed, an ethical integration of AI can contribute to better clinical outcomes and higher-quality care. Looking ahead, future research should explore strategies to minimize risks while maximizing the benefits of AI’s implementation.
Keywords: artificial intelligence, ethics of technology, healthcare automation, healthcare ethics, nursing care, technological innovation.
Resumen: Esta revisión de literatura tiene como objetivo analizar los aspectos éticos de la inteligencia artificial en el cuidado de enfermería, mediante la identificación de los riesgos y las oportunidades. Se utilizó un diseño basado en la revisión de literatura reciente (2018 a 2024), analizando casos de uso y modelos de inteligencia artificial aplicados en distintos contextos de la atención en salud, especialmente en la disciplina de la enfermería. Los resultados muestran que la inteligencia artificial puede promover condiciones de desigualdad, dependencia tecnológica, sesgo algorítmico, deshumanización, posibles afectaciones en la privacidad, la seguridad de datos y la automatización. Pero también puede tener un impacto positivo en la mejora de la eficiencia, los diagnósticos y tratamientos avanzados, la educación y capacitación continua, la personalización del cuidado, el acceso a la información y la colaboración interdisciplinaria. En conclusión, esta revisión resalta el potencial transformador de la inteligencia artificial en el cuidado de enfermería, siempre que se implemente de manera ética y responsable logrando contribuir a mejorar los resultados clínicos y la calidad del cuidado. Se recomienda que las investigaciones futuras se centren en la forma de minimizar los riesgos y aprovechar al máximo su implementación.
Palabras clave: inteligencia artificial, ética en tecnología, automatización del cuidado, ética en salud, cuidado de enfermería, innovación tecnológica.
INTRODUCTION
Throughout history, innovation has driven the advancement of civilization (Von Hippel & Suddendorf, 2018). From torches to electric light bulbs, technological evolution has been pivotal to human survival and progress (Molina Gómez et al., 2015). This innovative trend has extended to the development of complex systems such as Artificial Intelligence (AI), which today stands as one of humanity’s most significant breakthroughs—reflecting both technical mastery and a deeper understanding of the world (Liu et al., 2018).
AI is defined as a system’s ability to interpret data, learn from it, and adapt to achieve specific objectives (Haenlein & Kaplan, 2019). This technology mimics human cognitive functions, enabling machines to solve problems, recognize images and speech, and make informed decisions (Jiang et al., 2017). Fueled by big data and advances in computing power, AI has transitioned from experimental research to real-world integration across sectors like business and healthcare (Haenlein & Kaplan, 2019).
In healthcare, AI is revolutionizing diagnostics and treatment, delivering unprecedented precision. For instance, image recognition tools have outperformed physicians in detecting skin cancer, transforming diagnostic approaches (Haenlein & Kaplan, 2019). In nursing, AI aids in identifying warning signs and refining care strategies. However, these tools require training on clinical data, which carries inherent risks (Jiang et al., 2017). Beyond clinical applications, AI optimizes administrative efficiency—reducing costs, streamlining resources, and enhancing patient services (Sunarti et al., 2021).
Nevertheless, AI raises serious ethical dilemmas in nursing care, including potential reductions in clinical and administrative staff (López Baroni, 2019). Moreover, if trained on biased data, AI may make erroneous or unfair decisions regarding diagnoses and treatments, underscoring the need to evaluate the doctrine of double effect (Seibert et al., 2021; Sunarti et al., 2021). The fear of technology replacing human labor is not new; it echoes the 19th-century Luddite movement in England[1], where workers destroyed machines to protest job losses. In nursing, however, this scenario seems unlikely given the global nursing shortage and the irreplaceable role of compassion (García Uribe, 2020), emotions, and subjective judgment in delivering quality care (Seibert et al., 2021). Unlike the Luddites, today’s challenge lies not in resisting technology but in harnessing it as a tool.
AI also poses significant privacy and data security challenges, as breaches could compromise patient confidentiality (Stahl et al., 2023). Legal liability is another critical issue: if an AI system makes a mistake that harms a patient, who should be held accountable? (Sung et al., 2020). These dilemmas call for integrating AI into nursing care while upholding fundamental ethical principles, including autonomy, justice, beneficence, and non-maleficence (López Baroni, 2019; Sunarti et al., 2021). Considering the above, this article provides an overview of AI’s risks and opportunities in nursing, advocating for its role as a complement rather than a replacement.
METHODOLOGY
This scoping review followed the PRISMA Extension for Scoping Reviews (PRISMA-ScR) statement (Tricco et al., 2018), the standard framework for this type of study. A literature search was performed across four databases (PubMed, SciELO, Scopus, and ScienceDirect) for articles published between 2018 and 2024. Search strategies were designed using key terms in English—nursing, artificial intelligence, caring, machine learning, deep learning, and health automation—combined with Boolean operators. Search strings were refined for each database, and articles in both English and Spanish were considered.
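For illustration, the combination of key terms with Boolean operators can be sketched as follows; the query built here is hypothetical and does not reproduce the exact strings refined for each database.

```python
# Illustrative only: assembles one hypothetical Boolean search string from the
# key terms reported above; the actual per-database strings differed.
ai_terms = ["artificial intelligence", "machine learning", "deep learning", "health automation"]
care_terms = ["nursing", "caring"]

ai_block = " OR ".join(f'"{t}"' for t in ai_terms)
care_block = " OR ".join(f'"{t}"' for t in care_terms)
query = f"({care_block}) AND ({ai_block})"

print(query)
# ("nursing" OR "caring") AND ("artificial intelligence" OR "machine learning" OR ...)
```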
For the initial title-based screening, metadata were downloaded for all articles that met the following inclusion criteria: observational, experimental, cross-sectional, diagnostic, case–control, cohort, and qualitative studies; scoping, systematic, and narrative reviews; and clinical trials. All had to be published between 2018 and 2024 and address the use of AI in nursing care. Duplicate articles were identified and removed using a categorical matrix that compared titles, authors, and DOIs. Further filtering was performed by reviewing abstracts and full texts. Quality and risk of bias were assessed according to study design, following the recommendations of Munn et al. (2014) for review studies. This process was carried out independently by two researchers, and any discrepancies were resolved with the intervention of a third researcher.
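A minimal sketch of the duplicate-removal step, assuming the downloaded metadata sit in a table with title, author, and DOI columns (the column names and records below are invented):

```python
import pandas as pd

# Hypothetical metadata export with one row per retrieved record.
records = pd.DataFrame({
    "title":   ["AI in nursing care", "AI in nursing care", "Fall prediction with ML"],
    "authors": ["Doe J; Roe A",       "Doe J; Roe A",       "Lee K"],
    "doi":     ["10.1000/example1",   "10.1000/example1",   "10.1000/example2"],
})

# Normalize before comparing, so trivial formatting differences do not hide duplicates.
normalized = records.assign(
    title=records["title"].str.strip().str.lower(),
    doi=records["doi"].str.strip().str.lower(),
)

# Keep only the first occurrence of each (title, authors, DOI) combination.
deduplicated = records.loc[~normalized.duplicated(subset=["title", "authors", "doi"])]
print(len(records), "->", len(deduplicated), "records after removing duplicates")
```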
The methodological quality and risk of bias of selected studies were assessed using criteria from widely recognized guidelines, including STROBE (Cuschieri, 2019) for observational and cohort studies; PRISMA (Page et al., 2021) for systematic reviews; PRISMA-ScR (Tricco et al., 2018) for scoping reviews; SANRA (Baethge et al., 2019) for narrative reviews; COREQ (Tong et al., 2007) for qualitative studies; CONSORT (Butcher et al., 2022) for clinical trials; STARD (Cohen et al., 2016) for diagnostic accuracy studies; and TRIPOD (Collins et al., 2015) for prediction model studies. These guidelines were employed as frameworks to guide the evaluation of key aspects such as clarity in research objectives, methodological description and selection, study population and sample, data analysis, and reporting of biases and limitations.
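The quality ratings reported in the Results are percentages of guideline items satisfied; a minimal sketch of that calculation, assuming each appraisal is recorded as a list of yes/no judgments:

```python
# Illustrative only: scores a study against a reporting checklist (e.g., STROBE)
# as the percentage of applicable items that were satisfied.
def quality_score(items_met: list[bool]) -> float:
    """Return the proportion of checklist items met, as a percentage."""
    return 100 * sum(items_met) / len(items_met)

# Hypothetical appraisal of one observational study against a 22-item checklist.
appraisal = [True] * 20 + [False] * 2
print(f"{quality_score(appraisal):.0f}%")  # 91%
```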
Relevant data from selected studies were extracted into a predesigned Excel matrix, capturing objectives, methodologies, findings, and conclusions related to AI’s risks and opportunities in nursing care. These data were then analyzed to identify patterns, trends, and gaps in the literature. Analytical categories were established inductively using a categorical matrix based on recurring patterns. Risk-related categories, on the one hand, included inequality, technological dependence, algorithmic bias, dehumanization, privacy and security, accountability and transparency, and automation. On the other hand, opportunity-related categories were efficiency improvement, advanced diagnosis and treatment, education and training, personalized care, access to information, and interdisciplinary collaboration.
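As a sketch of how such a categorical matrix can support the inductive analysis (study identifiers and tags below are hypothetical), each extracted finding is tagged with one or more categories and the tags are then counted to surface recurring patterns:

```python
from collections import Counter

# Hypothetical extraction matrix: each entry pairs a study with the categories
# assigned inductively to its findings (labels follow those defined in the text).
extraction = {
    "Study A": ["algorithmic bias", "privacy and security"],
    "Study B": ["efficiency improvement", "personalized care"],
    "Study C": ["algorithmic bias", "automation"],
}

# Frequency of each category across the included studies.
category_counts = Counter(tag for tags in extraction.values() for tag in tags)
for category, count in category_counts.most_common():
    print(category, count)
```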
RESULTS
From the PubMed, SciELO, Scopus, and ScienceDirect databases, a total of 1,008 articles were initially selected based on their titles. After removing 334 duplicates, 674 articles remained. A further 300 were excluded after abstract review and 189 after full-text review, using the following exclusion criteria: letters to the editor (17), editorials (16), preprints (2), full text not available (63), and lack of thematic relevance (391), leaving a total of 185 articles. Of these, 133 studies did not meet the essential requirements defined by the quality guidelines and were therefore excluded. Ultimately, 52 studies were deemed suitable for inclusion in this review. This process is illustrated in Figure 1.
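The screening figures reported above can be verified arithmetically; the following sketch simply restates the counts from the text:

```python
# Reproduces the screening arithmetic reported above.
identified = 1008
duplicates = 334
after_dedup = identified - duplicates                               # 674
excluded_abstract = 300
excluded_fulltext = 189
eligible = after_dedup - excluded_abstract - excluded_fulltext      # 185
excluded_quality = 133
included = eligible - excluded_quality                              # 52

assert after_dedup == 674 and eligible == 185 and included == 52
print(identified, "->", after_dedup, "->", eligible, "->", included)
```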
The oldest study included was published in 2018, and the most recent in 2024. A total of 51 studies were sourced from PubMed and one from ScienceDirect. Table 1 presents the methodological guidelines used for evaluating each type of article, along with the average quality score and the highest and lowest ratings for each guideline.
Guideline | Number of articles | Average Quality Score | Highest Score | Lowest Score |
STROBE | 12 | 97% | 100% | 91% |
PRISMA | 7 | 92% | 100% | 85% |
PRISMA-ScR | 13 | 92% | 98% | 86% |
SANRA | 2 | 88% | 92% | 83% |
COREQ | 1 | 91% | 91% | 91% |
CONSORT | 5 | 88% | 92% | 84% |
STARD | 7 | 88% | 93% | 80% |
TRIPOD | 5 | 87% | 93% | 81% |
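For orientation, weighting each guideline's average score by its number of articles in Table 1 gives an overall mean quality of approximately 91.6%; a sketch of that calculation (derived here for illustration, not reported in the reviewed studies):

```python
# Weighted mean of the average quality scores in Table 1 (weights = article counts).
table1 = {            # guideline: (number of articles, average quality score in %)
    "STROBE": (12, 97), "PRISMA": (7, 92), "PRISMA-ScR": (13, 92), "SANRA": (2, 88),
    "COREQ": (1, 91), "CONSORT": (5, 88), "STARD": (7, 88), "TRIPOD": (5, 87),
}
total_articles = sum(n for n, _ in table1.values())                            # 52
weighted_mean = sum(n * score for n, score in table1.values()) / total_articles
print(f"{total_articles} articles, overall mean quality ~ {weighted_mean:.1f}%")  # ~ 91.6%
```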
Based on this review, the main risks and opportunities associated with AI in nursing care can be grouped into the following subcategories:
Risks of Artificial Intelligence in Nursing Care
Inequality
Impact on Vulnerable Groups and Individual Technical Factors. The use of AI to predict falls in hospitalized patients may exacerbate inequalities due to biases present in training data. These biases affect model fairness, potentially leading to inadequate care for marginalized groups or vulnerable populations and increasing the risk of errors in prediction and care management (Chen & Xu, 2023).
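To illustrate how this kind of bias can be surfaced, the sketch below compares a fall-prediction model's sensitivity across two patient subgroups using entirely hypothetical labels and predictions; a marked gap would suggest the model under-detects risk in one group.

```python
# Hypothetical fairness check for a binary fall-risk model: compare sensitivity
# (recall) across two patient subgroups. All data below are invented.
def sensitivity(y_true, y_pred):
    true_pos = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    actual_pos = sum(y_true)
    return true_pos / actual_pos if actual_pos else float("nan")

groups = {
    "group_a": {"y_true": [1, 1, 1, 0, 0, 1], "y_pred": [1, 1, 1, 0, 0, 1]},
    "group_b": {"y_true": [1, 1, 1, 0, 0, 1], "y_pred": [1, 0, 0, 0, 0, 1]},
}

for name, data in groups.items():
    print(name, f"sensitivity = {sensitivity(data['y_true'], data['y_pred']):.2f}")
# A large gap between groups suggests the training data under-represent one of them.
```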
Technological and Economic Barriers. Unequal access to advanced technologies such as AI and virtual reality in nursing education puts under-resourced institutions at a disadvantage. This limits their students’ ability to develop critical skills, creating disparities in the quality of education received (Harmon et al., 2021).
Technological Dependence
Overreliance on AI Outputs. While AI has improved diagnostic accuracy and streamlined clinical processes, excessive reliance on these technological models may erode professionals’ ability to make decisions grounded in clinical judgment. This becomes problematic in complex cases where AI may fail to provide adequate solutions due to limitations in data or model training (Huqh et al., 2022).
Loss of Critical Skills. In nursing, the use of AI may limit professionals’ ability to address complex cases that do not fit predefined models (Ibuki et al., 2024). This technological dependence increases the risk of losing essential skills such as empathy and adaptability, which are crucial for providing personalized care (Ibuki et al., 2024).
Biases
Bias in Algorithm Development. Although there are notable examples of AI systems developed in collaboration with healthcare professionals, such as IBM Watson for Oncology (Somashekhar et al., 2018), which aim to address specific clinical needs, several studies highlight that such collaboration is not always the norm (Abbasgholizadeh Rahimi et al., 2021; Abràmoff et al., 2023). In some instances, algorithms are trained without sufficient understanding of the clinical context, leading to outputs that perpetuate existing biases. This underscores the importance of ensuring consistent and meaningful integration of healthcare personnel throughout the design and implementation phases of AI technologies.
Variability in Accuracy. Limitations in data generalization across clinical contexts have led to false positives and negatives, which may compromise clinical decision-making. Additionally, variability in accuracy is influenced by factors such as input data quality and imaging conditions (Li et al., 2021; Liu et al., 2019). This concern is especially critical in pediatric intensive care settings, where accuracy in predicting complications, such as venous thrombosis, is essential for patient safety (Lei et al., 2023; Li et al., 2021).
Disparity in Data Collection. The quality and diversity of datasets used to train AI models is another key contributor to algorithmic bias. Systems trained on data from specific populations often underrepresent minorities, thereby perpetuating disparities in care. This issue becomes especially problematic when attempting to predict social determinants of health, as historical data fail to adequately reflect marginalized communities (Ronquillo et al., 2022).
Dehumanization
Reduction in Human Supervision and Skills. The adoption of technologies such as digital angiography in hemodialysis patients and AI in medical education has raised concerns about diminishing human involvement in healthcare. Excessive reliance on technology may reduce clinical supervision and hinder the development of essential human skills such as empathy and communication. As a result, future healthcare professionals may become less capable of addressing the emotional and ethical dimensions of care, due to an overemphasis on technological solutions (Lee, Wu et al., 2021; Mi, 2022).
Weakening of Patient–Provider Relationships. Patients express concern that technology could replace human interaction, undermining trust and the quality of their relationships with healthcare professionals. Moreover, inadequate training in AI use increases the risk of errors and limits the effectiveness of these tools, potentially compromising the quality of care (Fazakarley et al., 2024).
Privacy and Data Security
Data Processing and Protection. AI systems process vast amounts of personal data, raising concerns about how this information is protected and utilized. In many cases, patients are not fully informed about how their data are collected and processed, which may compromise their privacy (Ng et al., 2022). This is particularly critical in direct-to-consumer health applications, where lack of transparency regarding the use of personal data can result in serious vulnerabilities (He et al., 2023).
Challenges in Interoperability and Security. The use of AI chatbots in programs such as weight loss interventions has demonstrated that integration with multiple devices and platforms can heighten the risks to personal data security. Users may be unaware of the implications of such integration, leaving them vulnerable to privacy breaches if the platforms involved do not implement adequate safeguards (Chew, 2022).
Accountability and Transparency
Transparency in Decision-Making and Accountability. The opacity of AI systems prevents healthcare professionals and patients from understanding how clinical decisions are made, complicating accountability in the event of harm or error (Masoumian Hosseini et al., 2023). In critical situations, such as emergency care, the absence of clear explanations from AI systems may erode trust and limit human intervention, worsening the impact of potential failures (Barwise et al., 2024; Masoumian Hosseini et al., 2023).
Need for Clear Accountability Mechanisms. There is a pressing need to clearly define responsibility in cases of error. Automated systems often make decisions without direct professional input, making it difficult to assign accountability when problems arise (Barwise et al., 2024). This highlights the importance of establishing regulatory frameworks and oversight mechanisms to determine who should be held accountable—whether technology developers, healthcare providers, or both.
Automation
Reduction in Human Interaction. The automation of processes such as pain assessment through AI may reduce direct interaction between healthcare professionals and patients, potentially dehumanizing care by limiting empathy and negatively affecting the quality of the service provided. Furthermore, when AI models lack representative data, their effectiveness across different clinical settings may be compromised, raising concerns about their universal applicability (Abuzaid et al., 2022; Zhang et al., 2023).
Job Uncertainty. Automation raises concerns about its impact on employment. As more tasks become automated, the demand for personnel may decline, creating uncertainty regarding job stability. This issue has been observed in other sectors, where workers experience high stress levels due to job insecurity caused by automation. Moreover, the psychosocial factors that affect worker well-being are not always considered (Abuzaid et al., 2022; Cheng et al., 2021).
Table 2 shows how studies were grouped by thematic categories related to the phenomenon of interest.
Author(s) and year of publication | Country | Study design | Population / Study object | Conclusion |
Harmon et al. (2021) | Australia | Scoping review | Nurses and students | Nurses play a key role in assessing and managing pain through clinical and communication skills. |
Chen and Xu (2023) | Taiwan | Case–control study | Patients | Machine learning enhances fall risk assessment, optimizing patient care and reducing staff workload. |
Huqh et al. (2022) | Malaysia | Systematic review | Articles | AI enables advanced software for detection, cephalometric analysis, clinical decision-making, and treatment prediction. |
Ibuki et al. (2024) | Japan | Narrative review | Articles | AI implementation in nursing should be balanced with human involvement to preserve core care values and uphold ethics. |
Tran et al. (2019) | United States | Systematic review | Bibliographic data | AI research in oncology focuses on improving detection, therapies, personalized medicine, and patient-reported outcomes. |
Somashekhar et al. (2018) | India | Retrospective observational study | Breast cancer patients | Treatment recommendations made by the AI Watson for Oncology and a multidisciplinary tumor board were highly concordant, especially in resource-limited settings. |
Abbasgholizadeh Rahimi et al. (2021) | Canada | Scoping review | Studies implementing AI | Variations in AI use in community-based primary health care highlight the need for further research to improve implementation. |
Abràmoff et al. (2023) | United States | Narrative review | Not applicable | AI integration must address bias at all stages of the product lifecycle to ensure equity and effectiveness. |
Ronquillo et al. (2022) | Canada | Scoping review | Not applicable | AI systems lack equity considerations, though incorporating nursing and health data could mitigate this limitation. |
Liu et al. (2019) | China | Cohort study | Electronic medical records | Machine learning outperforms existing criteria in identifying cancer patients at high risk of PICC-related thrombosis. |
Li et al. (2021) | China | Retrospective cohort study | Pediatric ICU patients | 15% of children in the ICU are at high risk of catheter-associated deep venous thrombosis (CADVT). |
Lei et al. (2023) | China | Retrospective cohort study | Critically ill children | Machine learning models can identify delirium risk in critically ill children after 24 hours in pediatric ICUs. |
Mi (2022) | China | Experimental study | Patients | Patients on maintenance hemodialysis (MHD) were grouped to assess the diagnostic value of digital subtraction angiography (DSA) in autogenous arteriovenous fistulas (AVFs). |
Lee, Wu et al. (2021) | Canada | Scoping review | Scientific articles | There is no consensus on what and how AI should be taught in undergraduate medical education. A standardized competency framework is needed. |
Fazakarley et al. (2024) | United Kingdom | Systematic review | Articles | The study emphasizes the need for improved communication and qualitative research to mitigate AI risks and maximize its benefits. |
Ng et al. (2022) | Singapore | Scoping review | Not applicable | AI can enhance nursing care, but more clinical trials in real-world settings are required. |
He et al. (2023) | China | Scoping review | Studies on direct-to-consumer AI health apps. | DTC health AI applications present both risks and opportunities that require ongoing evaluation. |
Chew (2022) | Singapore | Scoping review | Prior studies | AI chatbots should incorporate humanized, personalized, and engaging features to improve user experience and behavioral change. |
Barwise et al. (2024) | United States | Qualitative observational study | Interviews | AI can improve care quality and reduce disparities by prioritizing patients with language barriers for interpretation services. |
Masoumian Hosseini et al. (2023) | Iran | Scoping review | Prior studies | AI research in emergency medicine has grown, demonstrating predictive model potential but lacking decision-making transparency. |
Zhang et al. (2023) | United States | Scoping review | Not applicable | AI interventions improve pain recognition, prediction, and self-management, though most studies remain at the pilot stage. |
Abuzaid et al. (2022) | United Kingdom | Cross-sectional study | Nurses | The nursing profession lacks comprehensive AI understanding, requiring enhanced education for safe clinical integration. |
Cheng et al. (2021) | Taiwan | Cross-sectional study | Workers in various automation-level jobs | Workers in less automatable occupations may experience stress not captured by traditional measurement tools. |
Opportunities for Artificial Intelligence in Nursing Care
Improved Efficiency
Optimization of Clinical Resources and Workflows. In certain settings, AI has proven effective in optimizing workflows and resource management in clinical care. From automating documentation and creating personalized care plans to predicting the risk of falls and pressure ulcers, AI has enabled nurses to focus more on direct patient care, thereby improving the quality of service (Ng et al., 2022). In hospital management, AI has facilitated the prediction of length of stay and mortality in patients with diabetes and hypertension, optimizing bed utilization and other clinical resources (Barsasella et al., 2022).
Support for Clinical Decision-Making. AI has been shown to support clinical decision-making in areas such as hospital readmission prediction and respiratory infection management. Its implementation has significantly reduced rehospitalization rates, improving clinical outcomes (Romero-Brufau et al., 2020). In addition, some systems have assisted in patient triage by offering monitoring intervals based on clinical guidelines, thereby improving diagnostic accuracy (Li et al., 2022). In cases involving language barriers, AI has enhanced equity in care by quickly identifying patients who require interpretation services (Barwise et al., 2024).
Reduction of Workload. In the nursing field, AI has helped reduce workload by automating routine tasks such as intravenous bag monitoring, pressure injury management, and administrative duties, freeing up time for other responsibilities (Hwang et al., 2023; Chen et al., 2022). The introduction of robots for repetitive tasks has reduced physical exhaustion, allowing professionals to focus on clinical judgment and empathetic care (Zrínyi et al., 2022).
Advanced Diagnosis and Treatment
Prediction of Complications. AI has been essential in predicting complications. In diabetic patients, AI models identified risks of neuropathy, nephropathy, retinopathy, and amputations, enabling personalized interventions (Gosak et al., 2022; Mousa et al., 2023). In hospitalized patients, AI predicted severe hypoglycemia and the risk of developing pressure injuries, facilitating early identification of adverse events (Fralick et al., 2021; Pei et al., 2023).
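As a hedged sketch of how such complication-risk models are commonly built (the features, data, and model choice below are illustrative and are not taken from the cited studies), a classifier trained on tabular clinical variables returns a per-patient risk probability:

```python
# Illustrative sketch: a tabular risk model that outputs a probability of a
# complication (e.g., pressure injury). Features and data are invented.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# Hypothetical features: age, mobility score, albumin, days hospitalized.
X = rng.normal(size=(200, 4))
y = (X[:, 1] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=200) > 0.8).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
new_patient = [[0.2, 1.4, -0.3, 2.0]]          # same four features, one patient
risk = model.predict_proba(new_patient)[0, 1]  # probability of the complication
print(f"predicted complication risk: {risk:.2f}")
```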
Real-Time Monitoring. AI has also enhanced real-time monitoring in various areas. In emergency departments, algorithms predicted complications in patients experiencing hyperglycemic crises (Hsu et al., 2023).
Support for Mental Health and Treatment Adherence. In the mental health field, AI accurately interpreted human emotions, aiding in diagnosis and emotional support (Elyoseph et al., 2024). AI models predicted depressive symptoms in older adults without requiring questionnaires (Susanty et al., 2023). Moreover, AI helped monitor treatment adherence through smartwatches, ensuring proper medication intake (Odhiambo et al., 2023).
Continuous Education and Training
The integration of AI and virtual reality has been shown to enhance pain management education, allowing students to practice in simulated environments without posing risks to patients. Furthermore, AI personalizes learning scenarios, improving knowledge retention (Harmon et al., 2021). In clinical practice, AI has supported evidence-based decision-making, promoting continuous education for nursing professionals and helping them stay current with recent practices and technologies (Abuzaid et al., 2022). In addition, the use of humanoid robots in hospital settings has proven effective in promoting health literacy and vaccine comprehension (McIntosh et al., 2022).
Personalized Care
Continuous and Emotional Care. AI and autonomous robots have transformed the care of patients with chronic illnesses and advanced dementia, demonstrating reductions in agitation, improved emotional well-being, and more patient-centered attention to emotional and social needs (David et al., 2021). These technologies also automate physical tasks such as mobilization, allowing staff to devote more time to specialized care (Cai et al., 2021; Stokes & Palmer, 2020).
Personalized Risk Prediction. AI has shown efficacy in clinical risk prevention. In nursing homes, predictive models identified key factors for preventing pressure ulcers (Lee, Shin et al., 2021). In home care, AI has enhanced hospitalization prediction by analyzing clinical notes (Topaz et al., 2020). AI also optimizes remote monitoring of older adults, detecting falls and cognitive decline, and supports health coaching for patients with type 2 diabetes as well as interventions for autism spectrum disorder (Schütz et al., 2022; Di et al., 2022; Jia et al., 2023).
Access to Information
AI has significantly improved access to critical information in hospital settings, enabling more informed and efficient decision-making in high-pressure situations. For example, AI-powered chatbots in hospitals have provided quick and accurate responses to caregiver inquiries, reducing errors and workload (Daniel et al., 2022). In emergency departments, AI-assisted systems for real-time medical record entry (via voice-to-text data conversion) have improved triage efficiency and accuracy by capturing more relevant information for patient care (Cho et al., 2022).
Interdisciplinary Collaboration
The integration of AI into healthcare has created new opportunities for interdisciplinary collaboration, as nurses, software engineers, and other professionals work together to develop more optimized technologies (Shang, 2021). Involving patients and the public in the design of these tools ensures their functionality while fostering more inclusive and equitable care that adapts to diverse user needs (Zidaru et al., 2021).
The findings mentioned above are summarized in Table 3. The studies were grouped into thematic categories related to the phenomenon of interest.
Author(s) and year of publication | Country | Study design | Population / Study object | Conclusion |
Ng et al. (2022) | Singapore | Scoping review | Not applicable | AI can improve the quality of nursing care. |
Romero-Brufau et al. (2020) | United States | Experimental study | Hospitalized patients | AI reduces hospital readmission rates by supporting clinical decision-making. |
Zhan et al. (2020) | United States | Observational study | Medicare beneficiaries | Personalized machine learning enables more precise and effective management of respiratory infection outbreaks. |
Li et al. (2022) | China | Cohort study | Patients undergoing upper endoscopy | ENDOANGEL-AS shows high accuracy and efficiency in monitoring high-risk patients for upper gastrointestinal cancer. |
Hwang et al. (2023) | South Korea | Experimental study | Images for training, validation, and testing | A deep learning-based smart pole was developed to monitor intravenous bags in real time, thus preventing errors and reducing nurses’ workload |
Barsasella et al. (2022) | Taiwan | Experimental study | Patients with type 2 diabetes and hypertension | An effective model was created to predict hospital stay duration and mortality in patients with T2DM and hypertension. |
Zrínyi et al. (2022) | Hungary | Observational study | Nurses | Nurses expressed reservations about robotic self-learning capabilities, reflecting concerns about core competency preservation. |
Barwise et al. (2024) | United States | Qualitative observational study | Interviews | AI can prioritize patients with language barriers, reducing healthcare disparities. |
Chen et al. (2022) | China | Narrative review | Not applicable | A conceptual gap exists that needs to be addressed, recognizing potential yet undocumented advances. |
Gosak et al. (2022) | Slovenia | Systematic review | Not applicable | AI accurately predicts diabetes complications and is a key tool in preventive nursing care. |
Mousa et al. (2023) | Egypt | Case–control study | Patients | AI enables high-accuracy prediction of diabetic foot ulcers. |
Hsu et al. (2023) | Taiwan | Cohort study | Electronic medical records | Real-time AI is a promising tool for predicting adverse outcomes in patients with hyperglycemic crises in emergency care. |
Pei et al. (2023) | China | Systematic review | Articles | Machine learning models show excellent performance in pressure injury prediction. |
Fralick et al. (2021) | Canada | Observational study | Electronic medical records | Machine learning accurately identifies hospitalized patients at high risk of hypoglycemia, with potential to improve clinical outcomes. |
Susanty et al. (2023) | Taiwan | Observational study | Clinical and demographic data of older adults | The GDS-15 can be used for follow-up measures, saving time and effort for both healthcare providers and older adults, especially those who are illiterate. |
Odhiambo et al. (2023) | United States | Experimental study | Medication-taking gestures | Smartwatches accurately and non-intrusively monitor medication-taking gestures, potentially improving medication adherence. |
Elyoseph et al. (2024) | Israel | Pilot experimental study | Not applicable | ChatGPT-4 achieves human-standard visual mentalization, while Bard requires improvement in emotion interpretation. |
Harmon et al. (2021) | Australia | Scoping review | Nurses and students | Nurses play a key role in pain assessment and treatment, combining clinical and communication skills. |
Abuzaid et al. (2022) | United Kingdom | Cross-sectional study | Nurses | More AI education is needed in nursing to ensure safe and effective integration into practice. |
McIntosh et al. (2022) | Australia | Experimental study | Patients, visitors, and hospital staff | A social humanoid robot proved useful in increasing knowledge on influenza prevention and promoting positive attitudes toward vaccination. |
David et al. (2021) | Romania | Systematic review | Articles | Autonomous robotic applications are a viable and cost-effective solution for patients with advanced dementia, benefiting both patients and the healthcare system. |
Stokes and Palmer (2020) | United States | Narrative review | Not applicable | Ethical AI implementation in nursing should respect core values, complement human care, and enhance the profession’s unique human aspects. |
Lee, Shin et al. (2021) | South Korea | Experimental study | Nursing home residents | The random forest model was the most accurate in predicting pressure ulcers in nursing homes, identifying resident-related and environmental factors. |
Topaz et al. (2020) | United States | Retrospective observational study | Clinical notes | This study explored how home healthcare clinical notes can help identify patients at risk of hospitalization or emergency department visits. |
Schütz et al. (2022) | Switzerland | Observational study | Data from contactless sensors in nursing homes | Combined with machine learning, these assessments are as effective as mobile approaches and enable large-scale identification of new digital biomarkers. |
Di et al. (2022) | Canada | Prospective observational study | Clinical and demographic data | Reinforcement learning models can automate diabetes coaching and improve health outcomes. |
Jia et al. (2023) | China | Systematic review | Articles in Databases | This study analyzes AI in ASD, highlighting the USA and China as leaders in publication volume, the UK as the most authoritative country in the area, and Stanford University as the most influential institution. |
Cai et al. (2021) | China | Experimental study | Human body images | This article presents a 3D human pose estimation system for home care robots using Kinect 2 and image processing to predict joint and axillary positions. |
Daniel et al. (2022) | France | Experimental study | Caregivers from five healthcare services | The chatbot is a useful tool for hospital caregivers, providing reliable information on drugs and pharmacy organization. |
Cho et al. (2022) | South Korea | Prospective experimental study | Triage data | RMIS-AI speeds up triage tasks compared to manual methods but requires technical refinement and further research for reliable adoption. |
Shang (2021) | Canada | Scoping review | Collected articles | This conceptual analysis emphasizes the need to distinguish between robots, CDSS, and AI, proposing AI evaluation in nursing based on its impact on patients, nurses, and organizations. |
DISCUSSION OF RESULTS
Despite the relatively recent application of AI in healthcare, its impact has generated a substantial and rapidly evolving body of evidence highlighting both risks and opportunities. Technology should not be viewed as a neutral instrument or a threat to humanized care, but rather as a tool that expands and redefines human capabilities by integrating technological advancements with essential human skills in nursing care (Monterroza Ríos et al., 2015). In this context, the notion of technocare emerges as a central axis for balancing the technical and human dimensions of care (García-Uribe et al., 2024).
The evidence indicates that AI is not without risks, which calls for a prudent and reflective approach to its applications in nursing. An Aristotelian perspective may be useful in finding a balanced position regarding the use of these technologies in healthcare. Issues such as inequality—particularly notable in the scarcity of studies from several Latin American countries, reflecting disparities in AI access—and automation hold special significance in already inequitable geopolitical contexts with major social challenges. The risks associated with technological dependence and the dehumanization of care require in-depth analysis, especially since humans have become “techno-persons” (Fazakarley et al., 2024; Huqh et al., 2022; Lee, Wu et al., 2021; Mi, 2022). Thus, care in the 21st century cannot be conceived devoid of technology.
Nonetheless, the results of this review also highlight the opportunities AI offers to transform nursing care. The automation of routine tasks, such as clinical monitoring and administrative management, can free up time for professionals to focus on activities that require essential human qualities (Noorbakhsh-Sabet et al., 2019). Additionally, AI’s predictive capabilities have proven valuable for anticipating complications and personalizing interventions, thereby improving clinical outcomes and enhancing the overall quality of care (Ng et al., 2022; Gosak et al., 2022).
A conscious examination of the applications and implications of AI in nursing leads to the recognition of both its potential and risks, calling for an Aristotelian balance between caution and exploration. In this regard, scholars such as Maliandi (Salerno, 2016) argue that new technologies may create biotechnological ethical dilemmas. Exploring the principles proposed by this author—precaution, exploration, non-discrimination, and respect for diversity—may be a valuable framework for situating a prudent balance in relation to the risks and opportunities posed by AI in nursing care.
It is important to note that AI is evolving at an extraordinary pace, and the information presented in this article may quickly become outdated. Consequently, maintaining an open and flexible perspective on these advancements will be crucial not only to mitigate implementation risks but also to harness AI’s potential for transforming nursing practice toward more efficient, equitable, and human-centered care.
CONCLUSIONS
AI holds significant potential to redefine nursing care, from improving operational efficiency to optimizing interdisciplinary collaboration. Although ethical concerns have been identified, the responsible adoption of AI could substantially enhance the quality of care and clinical outcomes.
To ensure that the implementation of AI in nursing is both effective and ethical, it is essential to establish ethical guidelines and regulatory frameworks that prioritize data protection, transparency, and respect for patient dignity. This would allow AI to become a reliable and equitable tool within healthcare systems. Future research should focus on evaluating strategies to minimize risks and maximize opportunities, ensuring that AI implementation contributes meaningfully to progress in this technological era.
REFERENCES
Abbasgholizadeh Rahimi, S., Légaré, F., Sharma, G., Archambault, P., Zomahoun, H. T. V., Chandavong, S., Rheault, N., T Wong, S., Langlois, L., Couturier, Y., Salmeron, J. L., Gagnon, M.-P., & Légaré, J. (2021). Application of Artificial Intelligence in Community-Based Primary Health Care: Systematic Scoping Review and Critical Appraisal. Journal of Medical Internet Research, 23(9), e29839. https://doi.org/10.2196/29839
Abràmoff, M. D., Tarver, M. E., Loyo-Berrios, N., Trujillo, S., Char, D., Obermeyer, Z., Eydelman, M. B., Foundational Principles of Ophthalmic Imaging and Algorithmic Interpretation Working Group of the Collaborative Community for Ophthalmic Imaging Foundation, & Maisel, W. H. (2023). Considerations for addressing bias in artificial intelligence for health equity. NPJ Digital Medicine, 6(170). https://doi.org/10.1038/s41746-023-00913-9
Abuzaid, M. M., Elshami, W., & Fadden, S. M. (2022). Integration of artificial intelligence into nursing practice. Health and Technology, 12(6), 1109-1115. https://doi.org/10.1007/s12553-022-00697-0
Baethge, C., Goldbeck-Wood, S., & Mertens, S. (2019). SANRA—a scale for the quality assessment of narrative review articles. Research Integrity and Peer Review, 4(5). https://doi.org/10.1186/s41073-019-0064-8
Barsasella, D., Bah, K., Mishra, P., Uddin, M., Dhar, E., Suryani, D. L., Setiadi, D., Masturoh, I., Sugiarti, I., Jonnagaddala, J., & Syed-Abdul, S. (2022). A Machine Learning Model to Predict Length of Stay and Mortality among Diabetes and Hypertension Inpatients. Medicina, 58(11), 1568. https://doi.org/10.3390/medicina58111568
Barwise, A. K., Curtis, S., Diedrich, D. A., & Pickering, B. W. (2024). Using artificial intelligence to promote equitable care for inpatients with language barriers and complex medical needs: Clinical stakeholder perspectives. Journal of the American Medical Informatics Association, 31(3), 611-621. https://doi.org/10.1093/jamia/ocad224
Butcher, N. J., Monsour, A., Mew, E. J., Chan, A.-W., Moher, D., Mayo-Wilson, E., Terwee, C. B., Chee-A-Tow, A., Baba, A., Gavin, F., Grimshaw, J. M., Kelly, L. E., Saeed, L., Thabane, L., Askie, L., Smith, M., Farid-Kapadia, M., Williamson, P. R., Szatmari, P., Tugwell, P., Golub, R. M., Monga, S., Vohra, S., Marlin, S., Ungar, W. J., & Offringa, M. (2022). Guidelines for reporting outcomes in trial reports: The CONSORT-Outcomes 2022 extension. JAMA, 328(22), 2252-2264. https://doi.org/10.1001/jama.2022.21022
Cai, Y., Clinto, M., & Xiao, Z. (2021). Artificial Intelligence Assistive Technology in Hospital Professional Nursing Technology. Journal of Healthcare Engineering, 2021, 1-7. https://doi.org/10.1155/2021/1721529
Chen, Y., Moreira, P., Liu, W., Monachino, M., Nguyen, T. L. H., & Wang, A. (2022). Is there a gap between artificial intelligence applications and priorities in health care and nursing management? Journal of Nursing Management, 30(8), 3736-3742. https://doi.org/10.1111/jonm.13851
Chen, Y.-H., & Xu, J.-L. (2023). Applying artificial intelligence to predict falls for inpatient. Frontiers in Medicine, 10. https://doi.org/10.3389/fmed.2023.1285192
Cheng, W., Pien, L., & Cheng, Y. (2021). Occupation‐level automation probability is associated with psychosocial work conditions and workers’ health: A multilevel study. American Journal of Industrial Medicine, 64(2), 108-117. https://doi.org/10.1002/ajim.23210
Chew, H. S. J. (2022). The Use of Artificial Intelligence–Based Conversational Agents (Chatbots) for Weight Loss: Scoping Review and Practical Recommendations. JMIR Medical Informatics, 10(4), e32578. https://doi.org/10.2196/32578
Cho, A., Min, I. K., Hong, S., Chung, H. S., Lee, H. S., & Kim, J. H. (2022). Effect of Applying a Real-Time Medical Record Input Assistance System with Voice Artificial Intelligence on Triage Task Performance in the Emergency Department: Prospective Interventional Study. JMIR Medical Informatics, 10(8), e39892. https://doi.org/10.2196/39892
Cohen, J. F., Korevaar, D. A., Altman, D. G., Bruns, D. E., Gatsonis, C. A., Hooft, L., Irwig, L., Levine, D., Reitsma, J. B., de Vet, H. C. W., & Bossuyt, P. M. M. (2016). STARD 2015 guidelines for reporting diagnostic accuracy studies: Explanation and elaboration. BMJ Open, 6(11), e012799. https://doi.org/10.1136/bmjopen-2016-012799
Collins, G. S., Reitsma, J. B., Altman, D. G., & Moons, K. G. M. (2015). Transparent reporting of a multivariable prediction model for individual prognosis or diagnosis (TRIPOD): The TRIPOD statement. Annals of Internal Medicine, 162(1), 55-63. https://doi.org/10.7326/M14-0697
Cuschieri, S. (2019). The STROBE guidelines. Saudi Journal of Anesthesia, 13(1 sup.), 31-34. https://doi.org/10.4103/sja.SJA_543_18
Daniel, T., De Chevigny, A., Champrigaud, A., Valette, J., Sitbon, M., Jardin, M., Chevalier, D., & Renet, S. (2022). Answering Hospital Caregivers’ Questions at Any Time: Proof-of-Concept Study of an Artificial Intelligence–Based Chatbot in a French Hospital. JMIR Human Factors, 9(4), e39102. https://doi.org/10.2196/39102
David, L., Popa, S., Barsan, M., Muresan, L., Ismaiel, A., Popa, L., Perju‑Dumbrava, L., Stanculete, M., & Dumitrascu, D. (2021). Nursing procedures for advanced dementia: Traditional techniques versus autonomous robotic applications (Review). Experimental and Therapeutic Medicine, 23(2), 124. https://doi.org/10.3892/etm.2021.11047
Di, S., Petch, J., Gerstein, H. C., Zhu, R., & Sherifali, D. (2022). Optimizing Health Coaching for Patients With Type 2 Diabetes Using Machine Learning: Model Development and Validation Study. JMIR Formative Research, 6(9), e37838. https://doi.org/10.2196/37838
Elyoseph, Z., Refoua, E., Asraf, K., Lvovsky, M., Shimoni, Y., & Hadar-Shoval, D. (2024). Capacity of Generative AI to Interpret Human Emotions From Visual and Textual Data: Pilot Evaluation Study. JMIR Mental Health, 11, e54369. https://doi.org/10.2196/54369
Fazakarley, C.-A., Breen, M., Thompson, B., Leeson, P., & Williamson, V. (2024). Beliefs, experiences and concerns of using artificial intelligence in healthcare: A qualitative synthesis. Digital Health, 10. https://doi.org/10.1177/20552076241230075
Fralick, M., Dai, D., Pou‐Prom, C., Verma, A. A., & Mamdani, M. (2021). Using machine learning to predict severe hypoglycaemia in hospital. Diabetes, Obesity and Metabolism, 23(10), 2311-2319. https://doi.org/10.1111/dom.14472
García Uribe, J. C. (2020). Cuidar del cuidado: Ética de la compasión, más allá de la protocolización del cuidado de enfermería. Cultura de los Cuidados, 24(57), 52-60. https://doi.org/10.14198/cuid.2020.57.05
García-Uribe, J. C., Arteaga-Noriega, A. V., & Bedoya-Carvajal, O. A. (2024). La técnica y el cuidado de enfermería: Entre deshumanización y tecnificación. Trilogía Ciencia Tecnología Sociedad, 16(32), e2996. https://doi.org/10.22430/21457778.2996
Gosak, L., Martinović, K., Lorber, M., & Stiglic, G. (2022). Artificial intelligence based prediction models for individuals at risk of multiple diabetic complications: A systematic review of the literature. Journal of Nursing Management, 30(8), 3765-3776. https://doi.org/10.1111/jonm.13894
Haenlein, M., & Kaplan, A. (2019). A Brief History of Artificial Intelligence: On the Past, Present, and Future of Artificial Intelligence. California Management Review, 61(4), 5-14. https://doi.org/10.1177/0008125619864925
Harmon, J., Pitt, V., Summons, P., & Inder, K. J. (2021). Use of artificial intelligence and virtual reality within clinical simulation for nursing pain education: A scoping review. Nurse Education Today, 97, 104700. https://doi.org/10.1016/j.nedt.2020.104700
He, X., Zheng, X., & Ding, H. (2023). Existing Barriers Faced by and Future Design Recommendations for Direct-to-Consumer Health Care Artificial Intelligence Apps: Scoping Review. Journal of Medical Internet Research, 25, e50342. https://doi.org/10.2196/50342
Hsu, C.-C., Kao, Y., Hsu, C.-C., Chen, C.-J., Hsu, S.-L., Liu, T.-L., Lin, H.-J., Wang, J.-J., Liu, C.-F., & Huang, C.-C. (2023). Using artificial intelligence to predict adverse outcomes in emergency department patients with hyperglycemic crises in real time. BMC Endocrine Disorders, 23, 234. https://doi.org/10.1186/s12902-023-01437-9
Huqh, M. Z. U., Abdullah, J. Y., Wong, L. S., Jamayet, N. B., Alam, M. K., Rashid, Q. F., Husein, A., Ahmad, W. M. A. W., Eusufzai, S. Z., Prasadh, S., Subramaniyan, V., Fuloria, N. K., Fuloria, S., Sekar, M., & Selvaraj, S. (2022). Clinical Applications of Artificial Intelligence and Machine Learning in Children with Cleft Lip and Palate—A Systematic Review. International Journal of Environmental Research and Public Health, 19(17), 10860. https://doi.org/10.3390/ijerph191710860
Hwang, Y. J., Kim, G. H., Kim, M. J., & Nam, K. W. (2023). Deep learning-based monitoring technique for real-time intravenous medication bag status. Biomedical Engineering Letters, 13(4), 705-714. https://doi.org/10.1007/s13534-023-00292-w
Ibuki, T., Ibuki, A., & Nakazawa, E. (2024). Possibilities and ethical issues of entrusting nursing tasks to robots and artificial intelligence. Nursing Ethics, 31(6), 1010-1020. https://doi.org/10.1177/09697330221149094
Jia, Q., Wang, X., Zhou, R., Ma, B., Fei, F., & Han, H. (2023). Systematic bibliometric and visualized analysis of research hotspots and trends in artificial intelligence in autism spectrum disorder. Frontiers in Neuroinformatics, 17, 1310400. https://doi.org/10.3389/fninf.2023.1310400
Jiang, F., Jiang, Y., Zhi, H., Dong, Y., Li, H., Ma, S., Wang, Y., Dong, Q., Shen, H., & Wang, Y. (2017). Artificial intelligence in healthcare: Past, present and future. Stroke and Vascular Neurology, 2(4), 230-243. https://doi.org/10.1136/svn-2017-000101
Lee, J., Wu, A. S., Li, D., & Kulasegaram, K. (Mahan). (2021). Artificial Intelligence in Undergraduate Medical Education: A Scoping Review. Academic Medicine, 96(11 sup.), 62-70. https://doi.org/10.1097/acm.0000000000004291
Lee, S.-K., Shin, J. H., Ahn, J., Lee, J. Y., & Jang, D. E. (2021). Identifying the Risk Factors Associated with Nursing Home Residents’ Pressure Ulcers Using Machine Learning Methods. International Journal of Environmental Research and Public Health, 18(6), 2954. https://doi.org/10.3390/ijerph18062954
Lei, L., Zhang, S., Yang, L., Yang, C., Liu, Z., Xu, H., Su, S., Wan, X., & Xu, M. (2023). Machine learning-based prediction of delirium 24 h after pediatric intensive care unit admission in critically ill children: A prospective cohort study. International Journal of Nursing Studies, 146, 104565. https://doi.org/10.1016/j.ijnurstu.2023.104565
Li, H., Lu, Y., Zeng, X., Fu, C., Duan, H., Shu, Q., & Zhu, J. (2021). Prediction of central venous catheter-associated deep venous thrombosis in pediatric critical care settings. BMC Medical Informatics and Decision Making, 21(332). https://doi.org/10.1186/s12911-021-01700-w
Li, J., Hu, S., Shi, C., Dong, Z., Pan, J., Ai, Y., Liu, J., Zhou, W., Deng, Y., Li, Y., Yuan, J., Zeng, Z., Wu, L., & Yu, H. (2022). A deep learning and natural language processing-based system for automatic identification and surveillance of high-risk patients undergoing upper endoscopy: A multicenter study. eClinicalMedicine, 53, 101704. https://doi.org/10.1016/j.eclinm.2022.101704
Liu, J., Kong, X., Xia, F., Bai, X., Wang, L., Qing, Q., & Lee, I. (2018). Artificial Intelligence in the 21st Century. IEEE Access, 6, 34403-34421. https://doi.org/10.1109/ACCESS.2018.2819688
Liu, S., Zhang, F., Xie, L., Wang, Y., Xiang, Q., Yue, Z., Feng, Y., Yang, Y., Li, J., Luo, L., & Yu, C. (2019). Machine learning approaches for risk assessment of peripherally inserted Central catheter-related vein thrombosis in hospitalized patients with cancer. International Journal of Medical Informatics, 129, 175-183. https://doi.org/10.1016/j.ijmedinf.2019.06.001
López Baroni, M. J. (2019). Las narrativas de la inteligencia artificial. Revista de Bioética y Derecho, (46), 5-28. https://doi.org/10.1344/rbd2019.0.27280
Masoumian Hosseini, M., Masoumian Hosseini, S. T., Qayumi, K., Ahmady, S., & Koohestani, H. R. (2023). The Aspects of Running Artificial Intelligence in Emergency Care; a Scoping Review. Archives of Academic Emergency Medicine, 11(1), e38. https://doi.org/10.22037/aaem.v11i1.1974
McIntosh, C., Elvin, A., Smyth, W., Birks, M., & Nagle, C. (2022). Health Promotion, Health Literacy and Vaccine Hesitancy: The Role of Humanoid Robots. INQUIRY: The Journal of Health Care Organization, Provision, and Financing, 59. https://doi.org/10.1177/00469580221078515
Mi, J. (2022). Deep Learning‐Based Digital Subtraction Angiography Characteristics in Nursing of Maintenance Hemodialysis Patients. Contrast Media & Molecular Imaging, 2022(1), 9356108. https://doi.org/10.1155/2022/9356108
Molina Gómez, A. M., Roque Roque, L., Garcés Garcés, B. R., Rojas Mesa, Y., Dulzaides Iglesias, M. E., & Selín Ganén, M. (2015). El proceso de comunicación mediado por las tecnologías de la información. Ventajas y desventajas en diferentes esferas de la vida social. MediSur, 13(4), 481-493. http://www.medisur.sld.cu/index.php/medisur/article/view/3075
Monterroza Ríos, Á., Escobar, J. M., & Mejía Escobar, J. A. (2015). Por una revaloración de la filosofía de la técnica. Un argumento a favor del rol cultural de la técnica. Revista Iberoamericana de Ciencia, Tecnología y Sociedad-CTS, 10(30), 265-275. https://doi.org/10.52712/issn.1850-0013-502
Mousa, K. M., Mousa, F. A., Mohamed, H. S., & Elsawy, M. M. (2023). Prediction of Foot Ulcers Using Artificial Intelligence for Diabetic Patients at Cairo University Hospital, Egypt. SAGE Open Nursing, 9. https://doi.org/10.1177/23779608231185873
Munn, Z., Moola, S., Riitano, D., & Lisy, K. (2014). The development of a critical appraisal tool for use in systematic reviews addressing questions of prevalence. International Journal of Health Policy and Management, 3(3), 123-128. https://doi.org/10.15171/ijhpm.2014.71
Ng, Z. Q. P., Ling, L. Y. J., Chew, H. S. J., & Lau, Y. (2022). The role of artificial intelligence in enhancing clinical nursing care: A scoping review. Journal of Nursing Management, 30(8), 3654-3674. https://doi.org/10.1111/jonm.13425
Noorbakhsh-Sabet, N., Zand, R., Zhang, Y., & Abedi, V. (2019). Artificial Intelligence Transforms the Future of Health Care. The American Journal of Medicine, 132(7), 795-801. https://doi.org/10.1016/j.amjmed.2019.01.017
Odhiambo, C. O., Ablonczy, L., Wright, P. J., Corbett, C. F., Reichardt, S., & Valafar, H. (2023). Detecting Medication-Taking Gestures Using Machine Learning and Accelerometer Data Collected via Smartwatch Technology: Instrument Validation Study. JMIR Human Factors, 10, e42714. https://doi.org/10.2196/42714
Page, M. J., McKenzie, J. E., Bossuyt, P. M., Boutron, I., Hoffmann, T. C., Mulrow, C. D., Shamseer, L., Tetzlaff, J. M., Akl, E. A., Brennan, S. E., Chou, R., Glanville, J., Grimshaw, J. M., Hróbjartsson, A., Lalu, M. M., Li, T., Loder, E. W., Mayo-Wilson, E., McDonald, S., … Moher, D. (2021). The PRISMA 2020 statement: An updated guideline for reporting systematic reviews. BMJ, 372(71). https://doi.org/10.1136/bmj.n71
Pei, J., Guo, X., Tao, H., Wei, Y., Zhang, H., Ma, Y., & Han, L. (2023). Machine learning‐based prediction models for pressure injury: A systematic review and meta‐analysis. International Wound Journal, 20(10), 4328-4339. https://doi.org/10.1111/iwj.14280
Romero-Brufau, S., Wyatt, K. D., Boyum, P., Mickelson, M., Moore, M., & Cognetta-Rieke, C. (2020). Implementation of Artificial Intelligence-Based Clinical Decision Support to Reduce Hospital Readmissions at a Regional Hospital. Applied Clinical Informatics, 11(4), 570-577. https://doi.org/10.1055/s-0040-1715827
Ronquillo, C. E., Mitchell, J., Alhuwail, D., Peltonen, L.-M., Topaz, M., & Block, L. J. (2022). The Untapped Potential of Nursing and Allied Health Data for Improved Representation of Social Determinants of Health and Intersectionality in Artificial Intelligence Applications: A Rapid Review: IMIA Student and Emerging Professionals Group. Yearbook of Medical Informatics, 31(1), 94-99. https://doi.org/10.1055/s-0042-1742504
Salerno, G. M. (2016). Panorama de la ética convergente de Ricardo Maliandi. Eidos, (25), 73-94. https://doi.org/10.14482/eidos.25.7793
Schütz, N., Knobel, S. E. J., Botros, A., Single, M., Pais, B., Santschi, V., Gatica-Perez, D., Buluschek, P., Urwyler, P., Gerber, S. M., Müri, R. M., Mosimann, U. P., Saner, H., & Nef, T. (2022). A systems approach towards remote health-monitoring in older adults: Introducing a zero-interaction digital exhaust. NPJ Digital Medicine, 5, 116. https://doi.org/10.1038/s41746-022-00657-y
Seibert, K., Domhoff, D., Bruch, D., Schulte-Althoff, M., Fürstenau, D., Biessmann, F., & Wolf-Ostermann, K. (2021). Application Scenarios for Artificial Intelligence in Nursing Care: Rapid Review. Journal of Medical Internet Research, 23(11), e26522. https://doi.org/10.2196/26522
Shang, Z. (2021). A Concept Analysis on the Use of Artificial Intelligence in Nursing. Cureus, 13(5), e14857. https://doi.org/10.7759/cureus.14857
Somashekhar, S. P., Sepúlveda, M.-J., Puglielli, S., Norden, A. D., Shortliffe, E. H., Kumar, C. R., Rauthan, A., Kumar, N. A., Patil, P., Rhee, K., & Ramya, Y. (2018). Watson for Oncology and breast cancer treatment recommendations: Agreement with an expert multidisciplinary tumor board. Annals of Oncology, 29(2), 418-423. https://doi.org/10.1093/annonc/mdx781
Stahl, B. C., Antoniou, J., Bhalla, N., Brooks, L., Jansen, P., Lindqvist, B., Kirichenko, A., Marchal, S., Rodrigues, R., Santiago, N., Warso, Z., & Wright, D. (2023). A systematic review of artificial intelligence impact assessments. Artificial Intelligence Review, 56(11), 12799-12831. https://doi.org/10.1007/s10462-023-10420-8
Stokes, F., & Palmer, A. (2020). Artificial Intelligence and Robotics in Nursing: Ethics of Caring as a Guide to Dividing Tasks Between AI and Humans. Nursing Philosophy, 21(4), e12306. https://doi.org/10.1111/nup.12306
Sunarti, S., Fadzlul Rahman, F., Naufal, M., Risky, M., Febriyanto, K., & Masnina, R. (2021). Artificial intelligence in healthcare: Opportunities and risk for future. Gaceta Sanitaria, 35(1), 67-70. https://doi.org/10.1016/j.gaceta.2020.12.019
Sung, J. J., Stewart, C. L., & Freedman, B. (2020). Artificial intelligence in health care: Preparing for the fifth Industrial Revolution. Medical Journal of Australia, 213(6), 253-255. https://doi.org/10.5694/mja2.50755
Susanty, S., Sufriyana, H., Su, E. C.-Y., & Chuang, Y.-H. (2023). Questionnaire-free machine-learning method to predict depressive symptoms among community-dwelling older adults. PLOS ONE, 18(1), e0280330. https://doi.org/10.1371/journal.pone.0280330
Tong, A., Sainsbury, P., & Craig, J. (2007). Consolidated criteria for reporting qualitative research (COREQ): A 32-item checklist for interviews and focus groups. International Journal for Quality in Health Care, 19(6), 349-357. https://doi.org/10.1093/intqhc/mzm042
Topaz, M., Woo, K., Ryvicker, M., Zolnoori, M., & Cato, K. (2020). Home Healthcare Clinical Notes Predict Patient Hospitalization and Emergency Department Visits. Nursing Research, 69(6), 448-454. https://doi.org/10.1097/NNR.0000000000000470
Tran, B. X., Latkin, C. A., Sharafeldin, N., Nguyen, K., Vu, G. T., Tam, W. W. S., Cheung, N.-M., Nguyen, H. L. T., Ho, C. S. H., & Ho, R. C. M. (2019). Characterizing Artificial Intelligence Applications in Cancer Research: A Latent Dirichlet Allocation Analysis. JMIR Medical Informatics, 7(4), e14401. https://doi.org/10.2196/14401
Tricco, A. C., Lillie, E., Zarin, W., O’Brien, K. K., Colquhoun, H., Levac, D., Moher, D., Peters, M. D. J., Horsley, T., Weeks, L., Hempel, S., Akl, E. A., Chang, C., McGowan, J., Stewart, L., Hartling, L., Aldcroft, A., Wilson, M. G., Garritty, C., … Straus, S. E. (2018). PRISMA Extension for Scoping Reviews (PRISMA-ScR): Checklist and Explanation. Annals of Internal Medicine, 169(7), 467-473. https://doi.org/10.7326/M18-0850
Von Hippel, W., & Suddendorf, T. (2018). Did humans evolve to innovate with a social rather than technical orientation? New Ideas in Psychology, 51, 34-39. https://doi.org/10.1016/j.newideapsych.2018.06.002
Zhan, T., Goyal, D., Guttag, J., Mehta, R., Elahi, Z., Syed, Z., & Saeed, M. (2020). Machine intelligence for early targeted precision management and response to outbreaks of respiratory infections. The American Journal of Managed Care, 26(10), 445-448. https://doi.org/10.37765/ajmc.2020.88456
Zhang, M., Zhu, L., Lin, S.-Y., Herr, K., Chi, C.-L., Demir, I., Dunn Lopez, K., & Chi, N.-C. (2023). Using artificial intelligence to improve pain assessment and pain management: A scoping review. Journal of the American Medical Informatics Association, 30(3), 570-587. https://doi.org/10.1093/jamia/ocac231
Zidaru, T., Morrow, E. M., & Stockley, R. (2021). Ensuring patient and public involvement in the transition to ai‐assisted mental health care: A systematic scoping review and agenda for design justice. Health Expectations, 24(4), 1072-1124. https://doi.org/10.1111/hex.13299
Zrínyi, M., Pakai, A., Lampek, K., Vass, D., Siket Újváriné, A., Betlehem, J., & Oláh, A. (2022). Nurse preferences of caring robots: A conjoint experiment to explore most valued robot features. Nursing Open, 10(1), 99-104. https://doi.org/10.1002/nop2.1282
Notes
The authors declare no financial, professional, or personal conflicts of interest that could have unduly influenced the results or interpretations presented in this article.
All authors contributed to the development of the conceptual ideas, the design of the concepts, the analytical reflections, and the drafting and final revision of this manuscript.
Additional information
How to reference: Castrillón Isaza, K. A., Giraldo Restrepo, J. C., & García Uribe, J. C. (2025). Risks and Opportunities of Artificial Intelligence in Nursing Care: A Scoping Review. Trilogía Ciencia Tecnología Sociedad, 17(35), e3272. https://doi.org/10.22430/21457778.3272
Alternative link
https://revistas.itm.edu.co/index.php/trilogia/article/view/3272 (html)