Scientific fraud
See also: Research misconduct.
In scientific research, AI can enhance the quality and efficiency of data analysis and publication. However, AI has also opened up the possibility of generating high-quality fraudulent papers that are difficult to detect, raising important questions about the integrity of scientific research and the trustworthiness of published papers.
Májovský et al. investigated the capabilities of current AI language models in generating high-quality fraudulent medical articles, hypothesizing that modern AI models can create highly convincing fraudulent papers capable of deceiving readers and even experienced researchers.
A proof-of-concept study used ChatGPT (Chat Generative Pre-trained Transformer), powered by the GPT-3 (Generative Pre-trained Transformer 3) language model, to generate a fraudulent scientific article related to neurosurgery. GPT-3 is a large language model developed by OpenAI that uses deep learning algorithms to generate human-like text in response to user prompts. The model was trained on a massive corpus of text from the internet and can generate high-quality text in a variety of languages and on various topics. The authors posed questions and prompts to the model and refined them iteratively as the model generated responses. The goal was to create a completely fabricated article, including the abstract, introduction, materials and methods, discussion, references, charts, etc. Once the article was generated, it was reviewed for accuracy and coherence by experts in neurosurgery, psychiatry, and statistics and compared with similar existing articles.
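To illustrate the iterative, section-by-section prompting workflow described above, the following is a minimal sketch in Python using the OpenAI client library. It is not the authors' actual procedure; the model name, prompts, and section list are illustrative assumptions only.

# Minimal sketch of section-by-section prompting of a language model.
# Not the authors' code: model name, prompts, and sections are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY environment variable is set

sections = ["abstract", "introduction", "materials and methods",
            "results", "discussion", "references"]
topic = "a fictitious neurosurgical cohort study"

draft = {}
for section in sections:
    # In practice the prompts would be refined iteratively based on earlier output,
    # mirroring the refine-and-regenerate loop described in the study.
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # illustrative; the study used ChatGPT powered by GPT-3
        messages=[
            {"role": "system", "content": "You are drafting a scientific manuscript."},
            {"role": "user", "content": f"Write the {section} section of {topic}."},
        ],
    )
    draft[section] = response.choices[0].message.content

for name, text in draft.items():
    print(f"== {name.upper()} ==\n{text}\n")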
The study found that the AI language model could create a highly convincing fraudulent article that resembled a genuine scientific paper in terms of word usage, sentence structure, and overall composition. The AI-generated article included standard sections such as an introduction, materials and methods, results, and discussion, as well as a data sheet. It consisted of 1992 words and 17 citations, and the whole process of article creation took approximately 1 hour without any special training of the human user. However, some concerns and specific mistakes were identified in the generated article, particularly in the references.
The study demonstrates the potential of current AI language models to generate completely fabricated scientific articles. Although such papers appear sophisticated and seemingly flawless, expert readers may identify semantic inaccuracies and errors upon closer inspection. The authors highlight the need for increased vigilance and better detection methods to combat the potential misuse of AI in scientific research. At the same time, it is important to recognize the potential benefits of AI language models in genuine scientific writing and research, such as manuscript preparation and language editing 1).
Scientists have the responsibility of judging what is best for the patient and the optimal conditions for the conduct of a study. All physicians should ensure that research they participate in is ethically conducted. Every clinician should receive training in the responsible conduct of research and publication, and each project must be reviewed by an institutional review committee. Scientific misconduct is defined as any practice that deviates from those accepted by the scientific community and ultimately damages the integrity of the research process. “Sloppy research” and “scientific fraud” encompass activities that can compromise the science, the records, and the publication. Sloppy research results from the absence of appropriate training in research discipline and methodologies. In contrast, scientific fraud is a deliberate action during the application for, performance of, or publication of research; it includes piracy, plagiarism, and fraud. Research institutions should adopt rules and regulations to respond to allegations, initiate investigations, and impose appropriate sanctions 2).
Despite increasing awareness of scientific fraud, no attempt had been made to assess its prevalence in neurosurgery.
The aim of a review by Wang et al. was to assess the chronological trend, reasons, research type/design, and country of origin of retracted neurosurgical publications.
Two independent reviewers searched the EMBASE and MEDLINE databases using neurosurgical keywords for retracted articles from 1995 to 2016. Archives of retracted articles (retractionwatch.com) and the independent websites of neurosurgical journals were also searched. Data including the journal, impact factor, reason for retraction, country of origin, and citations were extracted.
A total of 98 studies were included for data extraction. Journal impact factors ranged from 0.57 to 35.03. Most studies (61) were retracted within the last 5 years of the search period. The most common reason for retraction was duplicate publication (26), followed closely by plagiarism (22) and fraudulent data (14). Other reasons included scientific errors/mistakes, author misattribution, and compromised peer review. Articles originated from several different countries, and some were widely cited.
Retractions of neurosurgical publications are increasing significantly, mostly due to issues of academic integrity, including duplicate publication and plagiarism. Implementation of more transparent data-sharing repositories, thorough screening of data prior to manuscript submission, and additional educational programs for new researchers may help mitigate these issues moving forward 3).