Should We Expect Ethics from Artificial Intelligence: The Case of ChatGPT Text Generation
Melnyk Y. B. 1,2
 
1 Kharkiv Regional Public Organization “Culture of Health”, Ukraine
2 Scientific Research Institute KRPOCH, Ukraine
Abstract

Background and Aim of Study: Implementing artificial intelligence (AI) in various areas of human activity is an avalanche-like process. This situation has raised questions about the feasibility and regulation of AI use that require justification, particularly in the context of scientific research.
The aim of the study: to identify the extent to which AI-based chatbots can meet ethical standards when analysing academic publications, given their current level of development.
Material and Methods: The present study employed various theoretical methods, including analysis, synthesis, comparison, and generalisation of experimental studies and published data, to evaluate ChatGPT’s capacity to adhere to fundamental ethical principles when analysing academic publications.
Results: The present study characterised the possibilities of using AI for academic research and publication preparation. The paper analysed a case of text generation by ChatGPT and found that the information generated by the chatbot was falsified. This fact, together with similar cases reported in the literature, indicates that ChatGPT follows a policy of generating information on request at any cost, completely disregarding the reliability of that information, the copyright of its owners, and the basic ethical standards for analysing academic publications established within the scientific community.
Conclusions: It is becoming increasingly clear that AI and the various tools based on it will evolve rapidly and acquire qualities ever more similar to human intelligence. We believe the main danger lies in losing control of this AI development process. The rapid development of negative qualities in AI, such as selfishness, deceitfulness and aggressiveness, which were previously thought to be unique to humans, may in the future lead AI to the idea of achieving superiority over humans. In this context, lying and violating ethical standards when analysing academic publications seem like innocent, childish pranks at the early stages of AI development. The results are important for drawing the attention of developers, scientists, and the general public to the problems of AI, and for developing specific standards, norms, and rules for its use in various fields.

Keywords

ethical standards, artificial intelligence, AI-based chatbots, ChatGPT, machine learning systems, falsification of research and publications

Information about the author:

Melnyk Yuriy Borysovych – https://orcid.org/0000-0002-8527-4638; Doctor of Philosophy in Pedagogy, Affiliated Associate Professor; Chairman of Board, Kharkiv Regional Public Organization “Culture of Health” (KRPOCH); Director, Scientific Research Institute KRPOCH, Ukraine.

Cite this article as:

APA


Melnyk, Y. B.  (2025). Should we expect ethics from artificial intelligence: The case of ChatGPT text generation. International Journal of Science Annals, 8(1), 5–11. https://doi.org/10.26697/ijsa.2025.1.5 

Harvard


Melnyk, Y. B. (2025). "Should we expect ethics from artificial intelligence: The case of ChatGPT text generation." International Journal of Science Annals, [online] 8(1), pp. 5–11. Viewed 30 June 2025, https://culturehealth.org/ijsa_archive/ijsa.2025.1.5.pdf

Vancouver


Melnyk Y. B. Should we expect ethics from artificial intelligence: The case of ChatGPT text generation. International Journal of Science Annals [Internet]. 2025 [cited 30 June 2025]; 8(1): 5–11. Available from: https://culturehealth.org/ijsa_archive/ijsa.2025.1.5.pdf https://doi.org/10.26697/ijsa.2025.1.5

  © 2018 – 2025 International Journal of Science Annals
DOI: https://doi.org/10.26697/ijsa