1 Kharkiv Regional Public Organization “Culture of Health”, Ukraine
2 Scientific Research Institute KRPOCH, Ukraine
3 Educational Center KRPOCH, Ukraine
Abstract
Background and Aim of Study: The development and use of ChatBots based on artificial intelligence (AI) have raised questions about their legitimacy in scientific research. Authors have increasingly begun to use AI tools, yet the role of these tools in scientific publications remains unrecognized. In addition, there are still no accepted norms for the use of ChatBots and no rules for how to cite them when writing a scientific paper.
The aim of the study: to examine the main issues related to the use of AI that authors and publishers face when preparing scientific papers for publication, and to develop a base logo that reflects the role and level of involvement of AI and of specific ChatBots in a particular study.
Results: We propose a definition of the term “Human-AI System”, which plays an important role in structuring scientific research on this new phenomenon. In examining the legitimacy of using AI-based ChatBots in scientific research, we offer a method for indicating AI involvement and the role of ChatBots in a scientific publication. The specially developed base logo is easy to perceive visually and can be used to indicate ChatBots’ involvement in, and contribution to, a paper submitted for publication.
Conclusions: The positive aspects of using ChatBots, which greatly simplify the process of preparing and writing scientific publications, may far outweigh the minor inaccuracies they may introduce. In this Editorial, we invite authors and publishers to discuss the legitimacy we grant to AI and the need to define the role and contribution that ChatBots can make to scientific publications.
Keywords
ChatBot, artificial intelligence, Human-AI System, legitimacy, logo, scientific publication
Melnyk Yuriy Borysovych (Corresponding Author) – https://orcid.org/0000-0002-8527-4638;
Pypenko Iryna Sergiivna – https://orcid.org/0000-0001-5083-540X; Doctor of Philosophy in Economics, Associate Professor, Secretary of Board, Kharkiv Regional Public Organization “Culture of Health”; Co-Director, Scientific Research Institute KRPOCH; Director, Educational Center KRPOCH, Ukraine
APA
Melnyk, Yu. B., & Pypenko, I. S. (2023). The legitimacy of artificial intelligence and the role of ChatBots in scientific publications. International Journal of Science Annals, 6(1), 5–10. https://doi.org/10.26697/ijsa.2023.1.1
Harvard
Melnyk, Yu. B., & Pypenko, I. S. 2023. "The Legitimacy of Artificial Intelligence and the Role of ChatBots in Scientific Publications". International Journal of Science Annals, [online] 6(1), pp. 5–10. viewed 30 June 2023, https://culturehealth.org/ijsa_archive/ijsa.2023.1.1.pdf
Vancouver
Melnyk Yu. B., & Pypenko I. S. The Legitimacy of Artificial Intelligence and the Role of ChatBots in Scientific Publications. International Journal of Science Annals [Internet]. 2023 [cited 30 June 2023]; 6(1): 5–10. Available from: https://culturehealth.org/ijsa_archive/ijsa.2023.1.1.pdf https://doi.org/10.26697/ijsa.2023.1.1