Several journals and publishers have issued guidelines and policies on ChatGPT and similar AI tools. This is not a comprehensive list; always read the guidelines of the journals you intend to submit to.
Cambridge University Press - Authorship and Contributorship: "AI use must be declared and clearly explained in publications such as research papers, just as we expect scholars to do with other software, tools, and methodologies. AI does not meet the Cambridge requirements for authorship, given the need for accountability. AI and LLM tools may not be listed as an author on any scholarly work published by Cambridge."
Elsevier - Duties of Authors: "Where authors use AI and AI-assisted technologies in the writing process, these technologies should only be used to improve readability and language of the work and not to replace key researcher tasks such as producing scientific insights, analyzing and interpreting data, or drawing scientific conclusions. Applying the technology should be done with human oversight and control, and authors should carefully review and edit the result because AI can generate authoritative-sounding output that can be incorrect, incomplete, or biased. The authors are ultimately responsible and accountable for the contents of the work."
JAMA Network - Guidance for Authors, Peer Reviewers, and Editors on Use of AI, Language Models, and Chatbots: JAMA and the JAMA Network journals released guidance on the responsible use of these tools by authors and researchers in scholarly publishing.1,2 These policies preclude the inclusion of nonhuman AI tools as authors and require the transparent reporting of the use of such tools in preparing manuscripts and other content and when used in research submitted for publication. In addition, submission and publication of clinical images created by AI tools is discouraged, unless part of formal research design or methods. In all such cases, authors must take responsibility for the integrity of the content generated by these models and tools.
JAMA Network - Instructions for Authors: "Nonhuman artificial intelligence, language models, machine learning, or similar technologies do not qualify for authorship. If these models or tools are used to create content or assist with writing or manuscript preparation, authors must take responsibility for the integrity of the content generated by these tools. Authors should report the use of artificial intelligence, language models, machine learning, or similar technologies to create content or assist with writing or editing of manuscripts in the Acknowledgment section or Methods section if this is part of formal research design or methods. See also Reproduced and Re-created Material and Image Integrity."
Sage Publishing - Publishing Policies: "Authors must clearly indicate the use of language models in the manuscript, including which model was used and for what purpose. Please use the methods or acknowledgments section as appropriate. Please note that AI bots such as ChatGPT should not be listed as authors in your submission."
Taylor & Francis - Defining Authorship in Your Research Paper: "Authors must be aware that using AI-based tools and technologies for article content generation, e.g., large language models (LLMs), generative AI, and chatbots (e.g., ChatGPT), is not in line with our authorship criteria. All authors are wholly responsible for the originality, validity, and integrity of the content of their submissions. Therefore, LLMs and other similar types of tools do not meet the criteria for authorship."
Thieme - Journal Policies: "AI tools such as ChatGPT can make scholarly contributions to papers. The use of generative AI tools should be properly documented in the Acknowledgements or Material and Methods sections. AI tools should not be listed as authors, as they do not fulfill all criteria for authorship: they cannot take responsibility for the integrity and the content of a paper, and they cannot take on legal responsibility."
Wiley - Best Practice Guidelines on Research Integrity and Publishing Ethics: "Artificial Intelligence Generated Content (AIGC) tools—such as ChatGPT and others based on large language models (LLMs)—cannot be considered capable of initiating an original piece of research without direction by human authors. They also cannot be accountable for a published work or for research design, which is a generally held requirement of authorship (as discussed in the previous section), nor do they have legal standing or the ability to hold or assign copyright."
Nature: Large Language Models (LLMs), such as ChatGPT, do not currently satisfy our authorship criteria. Notably, an attribution of authorship carries with it accountability for the work, which cannot be effectively applied to LLMs. Use of an LLM should be properly documented in the Methods section (and if a Methods section is not available, in a suitable alternative part) of the manuscript.
Springer: Large Language Models (LLMs), such as ChatGPT, do not currently satisfy our authorship criteria.