The use of ChatGPT in scientific publishing
O uso do ChatGPT na publicação científica

Paulo José Fortes Villas Boas – Universidade Estadual Paulista, Brazil
José Vitor Polachini do Valle Villas Boas – Pontifícia Universidade Católica de São Paulo, Brazil

Geriatrics, Gerontology and Aging, vol. 17, e0230027, 2023
Sociedade Brasileira de Geriatria e Gerontologia (SBGG)

Abstract: The use of ChatGPT (Chat Generative Pre-trained Transformer), an artificial intelligence tool, for writing scientific articles has been a subject of discussion in the academic community since its launch in late 2022. This artificial intelligence technology is increasingly capable of generating fluent language, and distinguishing between text produced by ChatGPT and text written by people is becoming ever more difficult. Here, we present some topics for discussion: (1) ensuring human verification; (2) establishing accountability rules; (3) avoiding the automatization of scientific production; (4) favoring truly open-source large language models (LLMs); (5) embracing the benefits of artificial intelligence; and (6) broadening the debate. With the emergence of these technologies, it is crucial to regulate, with continuous updates, the development and responsible use of LLMs with integrity, transparency, and honesty in research, together with scientists from various areas of knowledge, technology companies, large research funding bodies, science academies and universities, editors, non-governmental organizations, and law experts.

Keywords: Artificial intelligence, research, authorship, ethics in scientific publishing.


VIEWPOINT


Received: 16 April 2023

Accepted: 12 June 2023

Artificial intelligence (AI) has been a subject of discussion in the academic community owing to the possibility of using ChatGPT (Chat Generative Pre-trained Transformer) for writing scientific articles.

ChatGPT was launched in open access in late 2022 by OpenAI, a US-based AI research organization (https://openai.com/blog/chatgpt). It is a chatbot built on a large language model (LLM) trained on a vast corpus of text. Its possible uses include generating human-like text and computer code, editing articles, and even formulating answers to questions.1
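
As a concrete illustration of these uses, the following minimal sketch (not part of the original article) queries a chatbot LLM through OpenAI's Python client; the model name and prompt are illustrative assumptions.

# Minimal sketch: querying a chatbot LLM through OpenAI's Python client.
# The model name and the prompt below are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads the API key from the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # assumed model; any available chat model would do
    messages=[
        {"role": "system", "content": "You are an assistant for scientific writing."},
        {"role": "user", "content": "Summarize the ethical concerns of using LLMs in research."},
    ],
)
print(response.choices[0].message.content)  # the generated, human-like answer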

In science, ChatGPT is capable of generating fluent language, producing sentences that are hard to distinguish from those written by humans. In late 2022, the journal Nature reported that scientists were already using chatbots as research assistants to organize their thoughts and summarize the scientific literature.2

Scientific journals such as Nature and JAMA have restricted the use of ChatGPT, invoking fundamental principles of research: transparency in methods, and integrity and honesty on the part of authors.3,4 These requisites, considered essential for science to move forward, demand that research be open and transparent about its methods and evidence, regardless of the methodology used. Editors should ask whether the transparency and reliability of the knowledge-generating process were maintained, or whether the authors used software that operates in a non-transparent manner.3

A question currently under debate is whether editors can tell that a submitted text was generated by an LLM. At the moment, the answer is “maybe”: such text can be distinguished upon careful inspection, especially in scientific publications, because LLMs produce word patterns based on statistical associations drawn from their training data; another telltale sign is the absence of citations in the generated document. Both limitations will probably be overcome soon with the incorporation of reference citation tools.3
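
As a toy illustration of the second telltale sign (and only that; real editorial screening is far more sophisticated), the sketch below flags text that contains no recognizable citation markers. The regular expressions are simplistic, illustrative assumptions.

# Toy heuristic (illustrative only): flag text lacking citation-like markers.
import re

# Assumed, simplistic patterns: numeric markers such as [12] and
# author-year markers such as (Smith et al., 2023).
CITATION_PATTERNS = [
    r"\[\d{1,3}\]",
    r"\([A-Z][a-z]+ (et al\.,? )?\d{4}\)",
]

def has_citations(text: str) -> bool:
    """Return True if the text contains at least one citation-like marker."""
    return any(re.search(p, text) for p in CITATION_PATTERNS)

manuscript = "LLMs generate fluent prose, but often without citing any sources."
if not has_citations(manuscript):
    print("No citation markers found; the text may warrant closer inspection.")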

Another issue is whether ChatGPT can be considered an author. The current AI chatbots are still at the level of search engines. According to the International Committee of Medical Journal Editors (ICMJE), manuscript authorship must be based on four criteria:5

  • Substantial contributions to the conception or design of the work.

  • Drafting or critically reviewing the study.

  • Final approval of the version to be published.

  • Agreement to be accountable for all aspects related to the accuracy and integrity of the manuscript.

LLMs cannot take responsibility for their writing and therefore cannot be considered authors from the viewpoint of research ethics.6,7 This is the position of the World Association of Medical Editors (WAME). In May 2023, WAME recommended that chatbots not be listed as authors, since they can neither approve the final version to be published nor be accountable for aspects of the work,8 and it highlighted that authors remain responsible for the material a chatbot contributes to the manuscript, including ensuring the absence of plagiarism and the proper attribution of original sources.

In ethical matters, one should consider the risk of plagiarism and imprecision, in addition to a possible imbalance in accessibility between high- and low-income countries if the software becomes a paid service.9 Another ethical aspect, and a challenge to the use of AI, concerns ageism, defined as the use of age to categorize and divide people in ways that cause harm, disadvantage, and injustice, and that erode solidarity across generations.10 This prejudice can also be replicated by ChatGPT in scientific publications.11 The extent of this problem is not yet clear and should be the object of future scientific research.

These AI tools will probably revolutionize research practices and publication, creating opportunities, accelerating the innovation process, reducing time to publication and, by helping people write fluently, making science more egalitarian and increasing the diversity of scientific perspectives. On the other hand, they may degrade research quality and transparency and undermine the autonomy of human researchers.

Another negative aspect is that ChatGPT frequently produces incorrect text. According to Sam Altman, chief executive officer of OpenAI, ChatGPT is incredibly limited but good enough at some tasks to create a misleading impression of greatness,12 and it is capable of distorting scientific facts and spreading misinformation.13

The use of this technology is inevitable, and banning it will not answer the questions raised here; instead, the scientific community should discuss its implications. We suggest some points for this debate:13

1. Ensuring human verification:

If researchers use AI in their publications, strict verification and checking are indispensable. To this end, journals should include human verification steps or even ban certain applications of this technology.

2. Establishing accountability rules:

The authors should remain accountable for scientific practice. As current detection methods will probably soon be outpaced by advanced AI technologies, editors should ask authors to attest that publication policies were observed. Author contribution statements and acknowledgments should clearly state whether AI technologies were used and in which phase of the study, allowing editors and reviewers to examine manuscripts more thoroughly for biases, imprecision, and improper crediting. Likewise, scientific journals should be transparent about any use of LLMs in selecting manuscripts submitted for publication.

3. Avoiding the automatization of scientific production:

The automatization of scientific writing, excluding the human dimension, is among the risks of LLMs. ChatGPT does not create anything; it only recombines what it can abstract from the web at large. Nevertheless, its use can speed up the process of scientific discovery by generating hypotheses or ideas for experiments, discovering patterns and connections in existing data, and helping to identify gaps in existing knowledge.

With its processing speed and capacity, AI could thus take over the more mechanical and laborious tasks that consume a researcher's time, such as drafting results. If the scientist is the thinking mind, AI would participate as the workforce.

4. Favoring truly open-source LLMs:

Overall, next-generation AI technologies belong to a small number of technology companies; OpenAI, for example, is largely funded by Microsoft. Such concentration could lead to monopolies in search engines and text processors, raising considerable ethical concerns.

This lack of transparency (the underlying training datasets and models behind ChatGPT and its predecessors are not public) runs counter to the open science and transparency movement, hindering the discovery of gaps in, or the origin of, the chatbot's knowledge.14

The development and implementation of open-source AI technology must be prioritized, with investment in non-profit projects funded by non-commercial entities such as universities, government scientific funding bodies, and NGOs, by organizations such as the United Nations, and by tech giants. Such partnerships should foster advanced, open-source AI technologies that are transparent and democratically controlled, disrupting the hegemony of large technology companies and making knowledge acquisition and production more accessible.
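
In contrast with closed models, openly released LLMs publish their weights for anyone to download and inspect. A minimal sketch, assuming the Hugging Face transformers library and GPT-2 (chosen purely as an example of an openly released model; the prompt and generation length are illustrative):

# Minimal sketch: running an openly released LLM locally.
# GPT-2 is used purely as an example of a model with public weights.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")  # weights are openly downloadable

output = generator(
    "Open science requires transparent methods because",
    max_new_tokens=40,  # assumed generation length
)
print(output[0]["generated_text"])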

5. Embracing the benefits of AI:

Chatbots reduce the time needed to complete tasks and publish research results, freeing academics for new projects and thus accelerating innovation and advances across scientific fields.

AI has great potential, provided the current problems of bias and imprecision are solved, thereby improving the validity and reliability of LLMs and allowing researchers to use this technology properly in scientific writing.

Therefore, it is vital to discuss the potential conflict between the acceleration of knowledge production and the reduction in human participation and autonomy generated by the use of this tool in the research process.

6. Broadening the debate:

Given the disruptive potential of LLMs, the scientific community should urgently organize a comprehensive debate.

Some authors recommend that research groups discuss ChatGPT and experiment with it across the stages of scientific production.7,13 In this initial phase, in the absence of any regulation, the scientific community itself should govern its use with ethics, honesty, integrity, and transparency. All those involved should be reminded that they will be held accountable for their work, whether or not it was generated with ChatGPT.

In this discussion, addressing the implications for diversity and inequality in research is fundamental. LLMs can level the playing field in scientific writing, removing language barriers and enabling more people to write high-quality texts. However, high-income countries and privileged researchers may quickly find ways to exploit LLMs to speed up their own research, widening inequalities. The debates should therefore include underrepresented groups and communities affected by the research, drawing on their experiences as an important resource.

In conclusion, people’s creativity and originality, education, training, and productive interactions will probably remain essential for relevant and innovative scientific writing.

International regulation is needed, continuously updated, on the development and responsible use of LLMs with integrity, transparency, and honesty in scientific research and writing. This discussion should include scientists from different areas, technology companies, research funding bodies, science academies and universities, editors, NGOs, and law experts.

REFERENCES
1. Biswas S. ChatGPT and the future of medical writing. Radiology. 2023;307:e223312. https://doi.org/10.1148/radiol.223312
2. Hutson M. Could AI help you to write your next paper? Nature. 2022;611(7934):192-3. https://doi.org/10.1038/d41586-022-03479-w
3. Tools such as ChatGPT threaten transparent science; here are our ground rules for their use. Nature. 2023;613(7945):612. https://doi.org/10.1038/d41586-023-00191-1
4. Flanagin A, Bibbins-Domingo K, Berkwits M, Christiansen SL. Nonhuman “authors” and implications for the integrity of scientific publication and medical knowledge. JAMA. 2023;329(8):637-9. https://doi.org/10.1001/jama.2023.1344
5. International Committee of Medical Journal Editors. Defining the role of authors and contributors [Internet]. Available from: https://www.icmje.org/recommendations/browse/roles-and-responsibilities/defining-the-role-of-authors-and-contributors.html. Accessed on Mar 28, 2023.
6. Lee JY. Can an artificial intelligence chatbot be the author of a scholarly article? J Educ Eval Health Prof. 2023;20:6. https://doi.org/10.3352/jeehp.2023.20.6
7. Stokel-Walker C. ChatGPT listed as author on research papers: many scientists disapprove. Nature. 2023;613(7945):620-1. https://doi.org/10.1038/d41586-023-00107-z
8. Zielinski C, Winker MA, Aggarwal R, Ferris LE, Heinemann M, Lapeña JF, et al. Chatbots, generative AI, and scholarly manuscripts: WAME recommendations on chatbots and generative artificial intelligence in relation to scholarly publications [Internet]. World Association of Medical Editors; 2023. Available from: https://wame.org/page3.php?id=106. Accessed on May 31, 2023.
9. Salvagno M, Taccone FS, Gerli AG. Can artificial intelligence help for scientific writing? Crit Care. 2023;27(1):75. https://doi.org/10.1186/s13054-023-04380-2
10. Organização Mundial de Saúde. Relatório mundial sobre o idadismo [Internet]. Organização Pan-Americana da Saúde; 2021. Available from: https://www.paho.org/pt/documentos/relatorio-mundial-sobre-idadismo. Accessed on Jun 11, 2023.
11. World Health Organization. Ageism in artificial intelligence for health [Internet]. World Health Organization; 2022. Available from: https://www.who.int/publications-detail-redirect/9789240040793. Accessed on Jun 11, 2023.
12. Bordoloi P. When ChatGPT attempted UPSC exam [Internet]. Analytics India Magazine; 2023. Available from: https://analyticsindiamag.com/when-chatgpt-attempted-upsc-exam/. Accessed on Mar 28, 2023.
13. van Dis EAM, Bollen J, Zuidema W, van Rooij R, Bockting CL. ChatGPT: five priorities for research. Nature. 2023;614(7947):224-6. https://doi.org/10.1038/d41586-023-00288-7
14. Rudin C. Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat Mach Intell. 2019;1(5):206-15. https://doi.org/10.1038/s42256-019-0048-x
Notes
Funding

This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.

Conflicts of interest

The authors declare no conflicts of interest.

Author notes
Associate editor: Patrick Alexander Wachholz

Address for correspondence: Paulo José Fortes Villas Boas – Rua General Telles, 1519 – CEP 18602-120 – Botucatu (SP), Brasil. E-mail: paulo.boas@unesp.br
