====== Manuscript preparation ======

{{rss>https://pubmed.ncbi.nlm.nih.gov/rss/search/1X9MO_201KJDQKdx07Mqyv-6N3AiPpCXZOyC3TJIPN8kB6JIJZ/?limit=15&utm_campaign=pubmed-2&fc=20250321095648}}

With the rapid proliferation of [[artificial intelligence tools]], important questions have been raised about their [[applicability]] to [[manuscript]] [[preparation]]. Schneider et al. explore the methodological challenges of detecting AI-generated content in [[neurosurgical publications]], using existing detection tools to highlight both the presence of AI content and the fundamental limitations of current detection approaches.

They analyzed 100 [[random]]ly selected [[manuscript]]s published between [[2023]] and [[2024]] in high-impact [[neurosurgery journals]], using a two-tiered approach to identify potential AI-generated text. A passage was classified as AI-generated only if both a robustly optimized bidirectional encoder representations from transformers pretraining approach (RoBERTa)-based AI classification tool yielded a positive classification and the passage's perplexity score was less than 100. Chi-square tests were conducted to assess differences in the prevalence of AI-generated text across manuscript sections, topics, and types. To eliminate bias introduced by the more structured nature of abstracts, a subgroup analysis excluding abstracts was also conducted.

Approximately one in five (20%) manuscripts contained sections flagged as AI-generated. [[Abstract]]s and methods sections were disproportionately identified. After excluding abstracts, the association between section type and AI-generated content was no longer statistically significant.

The findings highlight both the increasing integration of AI into manuscript preparation and a critical challenge in academic publishing: as AI language models become increasingly sophisticated, traditional detection methods become less reliable.
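The study's two-tiered decision rule (classifier positive AND perplexity below 100) can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code: the function names and the toy token log-probabilities are invented for the example, and perplexity is computed from its standard definition, the exponential of the mean negative log-likelihood per token.

```python
import math

def perplexity(token_logprobs):
    """Perplexity = exp(mean negative log-likelihood per token).
    Lower values mean the scoring model finds the text more predictable."""
    nll = -sum(token_logprobs) / len(token_logprobs)
    return math.exp(nll)

def flag_as_ai(roberta_positive, token_logprobs, threshold=100.0):
    """Two-tiered rule: flag a passage only if the classifier is positive
    AND its perplexity falls below the threshold (100 in the study)."""
    return roberta_positive and perplexity(token_logprobs) < threshold

# Toy natural-log probabilities for two 50-token passages (invented values):
predictable = [-2.0] * 50   # perplexity = e^2  ~ 7.4, below the threshold
surprising  = [-5.0] * 50   # perplexity = e^5 ~ 148, above the threshold

print(flag_as_ai(True, predictable))   # True: both tiers agree
print(flag_as_ai(True, surprising))    # False: perplexity too high
print(flag_as_ai(False, predictable))  # False: classifier negative
```

In practice the log-probabilities would come from a scoring language model; the point of the sketch is that low perplexity alone is not enough, since both tiers must agree before a passage is flagged.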
This suggests the need to shift focus from detection to [[transparency]], emphasizing the [[development]] of clear [[disclosure]] policies and [[ethical guidelines]] for AI use in [[academic writing]] ((Schneider DM, Mishra A, Gluski J, Shah H, Ward M, Brown ED, Sciubba DM, Lo SL. Prevalence of Artificial Intelligence-Generated Text in Neurosurgical Publications: Implications for Academic Integrity and Ethical Authorship. Cureus. 2025 Feb 16;17(2):e79086. doi: 10.7759/cureus.79086. PMID: 40109787; PMCID: PMC11920854.)).

----

Schneider et al.'s [[investigation]] into the presence of AI-generated [[content]] in [[neurosurgical literature]] is a timely and important foray into the evolving relationship between [[artificial intelligence]] and [[academic publishing]]. With [[language model]]s such as [[ChatGPT]] gaining traction in [[scientific writing]], the study attempts to quantify their influence while simultaneously critiquing the [[reliability]] of existing detection [[method]]s. This dual focus, detection and critique, adds depth to what could otherwise have been a mere technical audit.

One of the study's strengths lies in its structured, two-tiered methodology combining a RoBERTa-based classifier with perplexity scoring. This hybrid approach is more robust than relying on a single metric, but it is not without limitations. Perplexity, for instance, is a crude proxy for "human likeness" and is highly sensitive to stylistic variance, making it susceptible to both false positives and false negatives. Moreover, the reliance on RoBERTa, a model trained on specific corpora, raises questions about generalizability, especially when the classifier is applied to highly specialized scientific texts.
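The prevalence comparisons themselves rest on a chi-square test of independence over a contingency table of flagged versus unflagged counts. A minimal sketch of that test for a 2x2 table follows; the counts are hypothetical, chosen only to show the mechanics, and are not the study's data.

```python
def chi_square_2x2(table):
    """Pearson chi-square statistic for a 2x2 contingency table
    [[a, b], [c, d]] (rows: section type; columns: flagged / not flagged)."""
    n = sum(sum(row) for row in table)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            # Expected count under independence: row total * column total / n
            expected = sum(table[i]) * sum(r[j] for r in table) / n
            stat += (observed - expected) ** 2 / expected
    return stat

# Hypothetical counts (NOT the study's data):
# 15 of 100 abstracts flagged vs. 5 of 100 other sections flagged.
table = [[15, 85],
         [5, 95]]

print(round(chi_square_2x2(table), 3))  # 5.556
```

With one degree of freedom, a statistic above the 3.841 critical value (alpha = 0.05) would indicate a significant association between section type and flagging; dropping an overrepresented row, as the authors did with abstracts in their subgroup analysis, can pull the statistic back below that threshold.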
The sample size of 100 manuscripts provides a reasonable snapshot, though it may be insufficient to support broader conclusions across the neurosurgical literature. Nonetheless, the finding that 20% of manuscripts had sections flagged as AI-generated is striking and underscores the silent integration of generative models into scientific workflows. Particularly noteworthy is the overrepresentation of AI-flagged content in abstracts and methods sections, areas that are often templated and may be more amenable to automation. The loss of statistical significance when abstracts are excluded suggests that the structured nature of scientific abstracts may confound detection tools, which raises further concerns about the specificity of current algorithms.

Importantly, Schneider et al. do not stop at reporting prevalence; they contextualize their findings within the broader ethical and methodological landscape. As the authors rightly point out, the increasing sophistication of AI renders traditional detection tools obsolete at an accelerating pace. This positions their study as a call to action: rather than investing in an arms race of detection, the scientific community may be better served by clear guidelines for disclosure and usage.

Still, the paper would benefit from a deeper exploration of what constitutes ethical AI use in manuscript preparation. For example, is it acceptable to use AI for grammar correction but not for drafting sections? Should journals require disclosure only when AI tools generate original text, or also for editing suggestions? Addressing these nuances would enrich the ethical argument the authors begin to sketch.

In conclusion, Schneider et al. contribute a valuable piece of early scholarship on the intersection of generative AI and scientific authorship.
While their methodology is not without shortcomings, the study successfully raises awareness of the blurred boundaries between human and machine contributions in academic writing. As AI becomes an invisible co-author in more manuscripts, transparency, rather than detection, emerges as the more sustainable and ethically grounded approach.

----

It is also important to recognize the potential benefits of using [[AI language models]] in genuine scientific writing and research, such as manuscript preparation and language editing ((Májovský M, Černý M, Kasal M, Komarc M, Netuka D. Artificial Intelligence Can Generate Fraudulent but Authentic-Looking Scientific Medical Articles: Pandora's Box Has Been Opened. J Med Internet Res. 2023 May 31;25:e46924. doi: 10.2196/46924. PMID: 37256685.)).

----

Preparing a [[manuscript]] for [[publication]] can be a complex and time-consuming process. Here are some general steps to follow:

  - Plan your manuscript: before you start [[writing]], outline your ideas, organize your thoughts, and determine the [[scope]] of your article.
  - Write your manuscript: once you have a clear plan, start writing. Follow the [[guidelines]] of the target [[journal]], such as word limits, [[formatting]], and [[citation style]].
  - Revise your manuscript: after finishing your first draft, revise it several times to improve its clarity, organization, and coherence. Get feedback from colleagues, mentors, or professional editors.
  - Format your manuscript: once the text is finalized, format it according to the journal's specifications. Check that all figures and tables are correctly labeled and that the references are complete and accurate.
  - Submit your manuscript: follow the submission guidelines carefully and provide all the required information, including a cover letter and any supporting documents.
  - Respond to reviewer comments: after the manuscript has been reviewed, respond to the comments and revise the manuscript as necessary, addressing all of the reviewers' concerns.
  - Proofread and approve the final manuscript: before publication, proofread the manuscript carefully and approve the final version.

Manuscript preparation can be time-consuming, but it is essential for communicating your work effectively and accurately to the scientific community.