Artificial Intelligence (AI) Policy
JENRS recognises the growing use of artificial intelligence (AI), including large language models (LLMs) and other generative AI tools, in research and scholarly writing. These tools can support authors and editors by helping with idea generation, literature discovery, language improvement, data analysis, and clearer presentation of work.
At the same time, AI tools do not replace human judgement, originality, and accountability. JENRS expects all contributors to use AI responsibly and ethically. Human authors, reviewers, and editors remain fully responsible for the integrity, accuracy, and originality of their work.
This policy applies to all submissions, peer review reports, and editorial decisions.
Acceptable use: AI assistance
Authors may use assistive AI tools that improve clarity without creating new scholarly content. Examples include:
- grammar and spelling correction
- improving readability and style
- formatting support
- suggestions to improve structure (without generating core content)
Disclosure is not required for these limited uses.
Even when disclosure is not required, authors are responsible for ensuring the final manuscript is accurate, original, and meets the journal’s standards.
Generative AI use
Use of generative AI that creates, rewrites, or materially reshapes content must be disclosed. This includes using AI to generate or significantly modify:
- text (beyond minor editing)
- abstracts, introductions, or conclusions
- literature reviews or summaries of studies
- tables, figures, images, or graphical abstracts
- computer code, models, or analysis scripts (where relevant)
- citations or reference lists
Disclosure must include:
- the tool/model used (name and version if available)
- what it was used for
- whether any AI-generated text, figures, code, or other outputs were directly included
Where to disclose:
- in the Methods section (if it affected methods, analysis, or code), or
- in the Acknowledgements section, or
- in a short “AI Use Statement” (recommended)
Source and citation rules
- Authors must cite original sources. AI tools must not be cited as primary sources for scholarly claims.
- Any references suggested by AI must be checked carefully. Do not include citations that you have not read and verified.
Author responsibilities when using AI
Authors must:
- verify all facts, interpretations, and citations
- correct errors, bias, fabricated content, and misleading statements
- ensure the work is original and does not plagiarise published or copyrighted material
- ensure AI use does not breach confidentiality, permissions, data protection, or third-party rights
- ensure any AI-generated images or figures comply with ethical and copyright expectations
Authorship
AI tools (including ChatGPT and other generative AI systems) cannot be listed as authors. Authorship requires accountability and responsibility that AI cannot provide.
Editorial action for undisclosed or inappropriate use
JENRS will not reject a submission solely because authors disclosed appropriate generative AI use.
However, if the journal becomes aware of undisclosed or inappropriate use of generative AI, JENRS may request clarification or revision, reject the manuscript at any stage, or take further action consistent with publication ethics guidance.
Inappropriate use includes (but is not limited to):
- plagiarism or close copying of existing works
- fabricated data, results, citations, or sources
- misleading statements about how work was produced
- submitting AI-generated content without author verification and accountability
Suggested AI Use Statement (template for authors)
Authors may include the following statement (edit as needed):
AI Use Statement:
The authors used [tool/model name] for [specific purpose]. The authors reviewed and verified the output and take full responsibility for the content of the manuscript. No confidential or copyrighted material beyond what is included in the manuscript was provided to the tool.
Confidentiality and manuscript security
Peer review content is confidential. Reviewers must not paste or upload manuscripts (or substantial parts of them), peer review reports, or author identities into public or third-party AI tools that may store or reuse the content, or expose it to others.
If a reviewer is unsure whether a tool is safe for confidential content, it must not be used.
Limited use allowed: writing support
Reviewers may use AI tools to improve the language and clarity of their review comments, as long as:
- they do not share confidential manuscript content with the tool in any way that could store, reuse, or expose it.
- the reviewer remains fully responsible for the review’s content, accuracy, and fairness.
Prohibited use: AI-generated peer review reports
Reviewers must not use generative AI to produce peer review reports or make recommendations in place of their own expert assessment.
Reviews that appear to be inappropriately generated by AI may be excluded from editorial consideration, and the reviewer may not be invited to review for JENRS in the future.
Reporting concerns
If a reviewer suspects undisclosed or inappropriate AI use in a submission, they should inform the associate editor and describe the reasons for concern.
Editorial responsibility
Journal Editors have overall responsibility for what is published in JENRS and must ensure editorial decisions are fair, accountable, and based on expert judgement.
Confidentiality and data protection
Editors must protect the confidentiality of submitted manuscripts and peer review materials.
Editors must not upload, paste, or share any unpublished manuscript content, peer review reports, author identities, or editorial correspondence into public or third-party AI tools if doing so could store the content, use it to train a system, expose it to other users, or breach confidentiality, copyright, or data protection obligations.
If an editor is unsure whether an AI tool is safe for confidential content, it must not be used.
Acceptable uses of AI for editorial support
Editors may use AI tools only as support for tasks that do not replace editorial judgement, such as:
- identifying potential reviewers based on keywords and subject fit.
- helping improve clarity of editor-written text (for example, a message the editor has already drafted).
- summarising non-confidential information that is already public (for example, published articles, public abstracts, and public author webpages).
Editors remain responsible for verifying any AI-assisted output before using it.
Prohibited uses of generative AI
Editors must not use generative AI tools to:
- write or generate decision letters (accept, revise, reject).
- produce editorial assessments of unpublished manuscripts.
- create summaries of unpublished research for decision-making.
- replace editor judgement on novelty, validity, ethics, or suitability for the journal.
- generate or alter peer review content in a way that misrepresents reviewer opinions.
Editorial decisions must be made by editors, not by AI systems.
Managing AI disclosure in manuscripts
Editors should check that authors disclose generative AI use when required by this policy.
When disclosure is provided, editors may consider:
- whether the use was appropriate for the research and reporting.
- whether methods and results remain transparent and verifiable.
- whether citations and claims appear accurate and traceable to original sources.
Disclosure alone is not a reason to reject a manuscript.
Handling suspected undisclosed or inappropriate AI use
If an editor suspects undisclosed or inappropriate AI use (for example, fabricated references, invented data, inconsistent methods, or AI-generated text presented as original scholarship), the editor may:
- request clarification and supporting materials from the authors (for example, data, code, or primary sources)
- require revision to correct errors and add disclosure
- reject the manuscript if integrity concerns remain
If concerns relate to a published article, JENRS will review the matter under its publication ethics process and may issue corrections, expressions of concern, or retractions where appropriate.
Record-keeping and transparency
Editors should document major decisions relating to AI concerns (for example, author queries, responses received, and the rationale for the final decision) to ensure a clear audit trail.