Fabricating research harms the fabricator’s credibility and undermines honest authors in the scientific community. We show how an AI-based language-model chatbot can be used to fabricate research, and we compare the accuracy of human and AI detection in recognising faked works. We also highlight the risks of using AI-generated research works and the motives for faking research.
The fabrication of research works has major ramifications for the fabricator, for those whose work is fabricated, and for the scientific community, which relies on the integrity of publications to make informed decisions about developments in sociology, economics, politics, and medicine, among other fields. Journal editors must be vigilant in detecting fabricated works to prevent their publication; however, the algorithms needed to detect fabrication differ from those used to detect plagiarism. There are dozens of online plagiarism checkers, and many journals have built-in tools that flag plagiarism almost instantly.
Detecting fabrication, by contrast, is difficult because the work is entirely invented rather than copied from other authors, so there is no source text to match against. Researchers therefore face a distinct hurdle in establishing whether a piece of work was manufactured using an AI-based system.
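To illustrate why this is hard, detection of machine-generated text typically falls back on statistical signals rather than source matching. The sketch below is a deliberately simple, hypothetical heuristic (sometimes called "burstiness"): human prose tends to vary sentence length more than generated prose. This is an illustrative toy only, not a reliable fabrication detector, and the threshold behaviour is an assumption for demonstration.

```python
import re
import statistics

def burstiness_score(text: str) -> float:
    """Toy heuristic: ratio of the standard deviation of sentence
    lengths (in words) to their mean. Lower scores indicate more
    uniform sentence lengths, which *weakly* correlates with
    machine-generated text. Illustrative only."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    return statistics.stdev(lengths) / statistics.mean(lengths)

# Perfectly uniform sentences score 0; varied sentences score higher.
uniform = "One two three four. One two three four. One two three four."
varied = "Short. This sentence is considerably longer than the first one. Medium length here."
print(burstiness_score(uniform) < burstiness_score(varied))  # True
```

In practice such surface statistics are easily defeated by prompting the model differently, which is precisely why automated detection of fabricated works remains unreliable.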
AI technology’s rapid advancement has enhanced productivity in a variety of areas by expediting or automating repetitive tasks. These same technologies, however, can produce research works that evade both human judgement and automated plagiarism and fabrication tools.
ChatGPT is a robust language-model chatbot released in late November 2022. Though other AI chatbots exist, ChatGPT proved revolutionary for many, attracting 1 million users in less than a week. It generates high-quality text that easily evades plagiarism detection and can be used to produce research papers quickly.