AI Policy
The journal recognizes the growing influence of artificial intelligence (AI) technologies, including large language models (LLMs) and other generative tools, on research activities and scholarly publishing. This policy is informed by international best practices in publication ethics and responsible research conduct, and reflects guidance issued by organizations such as the Committee on Publication Ethics (COPE) and the World Association of Medical Editors (WAME), as well as policies adopted by leading international publishers, including Elsevier. Its objective is to ensure the ethical, transparent, and responsible use of AI throughout the research, peer review, and editorial processes.
The use of LLMs and generative AI tools (for example, ChatGPT or similar systems) is not prohibited. However, AI systems do not qualify as authors and cannot assume responsibility for scholarly work. Using AI neither diminishes nor transfers the authors’ responsibility for the accuracy, originality, and integrity of the content, for which they remain fully accountable. Any use of AI tools must be clearly and transparently disclosed.
For authors
Artificial intelligence tools must not be listed as authors under any circumstances.
If AI tools are used to assist with writing, editing, data analysis, or the generation of tables, figures, or other content, this use must be explicitly disclosed in the manuscript, typically in the Methods or Acknowledgments section. The disclosure should include the name of the tool, the version (if applicable), the date of use, and a general description of how it was used; for example: “During the preparation of this work, the author(s) used [tool name, version] to [purpose]; the author(s) reviewed and edited the output and take full responsibility for the content of the publication.”
Authors remain fully responsible for the validity, originality, and proper citation of all content, including any material generated or assisted by AI tools.
The use of AI to fabricate data, results, images, or references, or to manipulate research outcomes, is strictly prohibited and constitutes scientific misconduct.
Improper or undisclosed use of AI may result in editorial actions, including rejection of the manuscript, correction, retraction, or other sanctions in accordance with publication ethics guidelines.
For reviewers
Reviewers must not upload or disclose manuscript content to publicly accessible or external AI tools that store, reuse, or train on user-provided data, as this would violate confidentiality obligations.
If reviewers use AI tools to assist with language or structure in drafting their review reports, this must be disclosed to the editors, and no confidential or identifying manuscript information may be shared with such tools.
Reviewers remain fully responsible for the content of their reports, including the accuracy, fairness, and appropriateness of any AI-assisted text.
For editors
Editors are responsible for ensuring that authors and reviewers are informed about this AI policy.
Any use of AI tools by editors or editorial staff for administrative tasks, correspondence, or decision support must be transparent and appropriately documented, and must not replace human judgment in editorial decision-making.
The editorial office should employ appropriate measures and tools to identify potential misuse of AI, including AI-generated content, plagiarism, or manipulation of the scholarly record.
The confidentiality, independence, and integrity of the editorial process must be maintained at all times.
Final remarks
This journal aligns with internationally recognized publisher policies, including Elsevier’s guidance on the ethical, transparent, and responsible use of generative AI in scientific publishing. In cases of uncertainty, dispute, or alleged misuse of AI technologies, the journal will apply established publication ethics standards and follow the procedures recommended by COPE and WAME.