Using generative AI tools like ChatGPT to write scientific articles is not yet feasible: a study found that more than half of the references produced by the AI tool were inaccurate, leading to factual errors and requiring more extensive fact-checking than human-written articles.

Generative AI tools like ChatGPT, created by OpenAI, are affecting many avenues of life, from product reviews to journalism. The latest area to feel the effects is scientific research.

A new study conducted by the Indiana University School of Medicine has revealed the extent of the influence of artificial intelligence on scientific research.

Researchers at the university wanted to explore ChatGPT's role in writing scientific research papers and in peer review, a question that has grown since OpenAI launched the generative AI tool in November 2022 and it began making a significant impact on the graphics, art and writing professions. Concerns about studios replacing writers and artists with AI were also among the issues behind the SAG-AFTRA strike for better pay.

Against this backdrop, the Indiana University School of Medicine analysed how effectively ChatGPT could be used to write scientific articles and the different ways in which it could be applied.

Speaking about the matter, Melissa Kacena, Vice Chair of Research at the Indiana University School of Medicine, said: "Right now, many journals do not want people to use ChatGPT to write their articles, but a lot of people are still trying to use it. We wanted to study whether ChatGPT can write a scientific article and what are the different ways you could successfully use it."

To study how ChatGPT writes scientific research papers, the researchers chose three topics: COVID-19 and bone health, fractures and the nervous system, and Alzheimer's disease and bone health. They then asked the $20-per-month subscription version of ChatGPT to write scientific articles on each topic.

The researchers tested a mixture of approaches: one involved entirely human writing, another used only ChatGPT, and a third combined human and AI writing.

The result of the study was a compilation of 12 articles, available in a special edition of Current Osteoporosis Reports.

Kacena explained that they compared the approaches by collecting data on "how much time it takes for this human method and how much time it takes for ChatGPT to write and then for faculty to edit the different articles".

The conventional approach to writing such an article is to "do a literature search, write an outline, start writing, and then faculty members revise and edit the draft", Kacena said.

The researchers found that the articles generated entirely by ChatGPT got nearly 70 per cent of their references wrong. The papers written with a combination of human and AI input, meanwhile, showed more plagiarism, especially when the tool was given more references.

The study found that generative AI tools like ChatGPT speed up the writing process, reducing the time spent drafting articles, but their output requires more fact-checking than articles written with combined human and AI input.

Scientific barriers to using generative AI tools like ChatGPT

The researchers also identified barriers to using ChatGPT for scientific writing: the language the AI tool produces is not well suited to it. Even when prompted to write at a higher scientific level, ChatGPT generated words and phrases that did not meet research standards.

One of the study's authors, Lilian Plotkin, a professor of physiology at the Indiana University School of Medicine, called the results "scary", as the writing was repetitive and contained incorrect references and wrong information.

This comes at a time when several authors, including Game of Thrones writer George R.R. Martin, have sued OpenAI, accusing it of mass-scale systematic theft for using their work to train AI models.

Another researcher, Jill Fehrenbacher, a professor of pharmacology at the same university, said the issue is likely to persist, as many non-native English speakers will continue to use ChatGPT to help write research papers even where journals prohibit it.

Fehrenbacher said people may write everything themselves but still run the text through ChatGPT to fix grammatical errors or improve their writing, which is why guidance on using it appropriately is needed.

"We hope to provide a guide for the scientific community so that if people are going to use it, here are some tips and advice," said Fehrenbacher regarding their research.

"I think it's here to stay, but we need to understand how we can use it in an appropriate manner that won't compromise someone's reputation or spread misinformation," Kacena added.