AI Usage Policy

Ra’ah Journal recognizes that Artificial Intelligence (AI) technologies are increasingly used in academic writing, editing, and research support. While these technologies may assist authors, editors, and reviewers in improving efficiency and language quality, their use must remain within strict ethical boundaries to protect academic integrity, originality, and accountability. Therefore, the journal establishes the following policy to regulate the responsible and transparent use of AI tools in the publication process.

Ethical Guidelines for AI Use by Authors

Authors are permitted to use AI-assisted technologies during the preparation of manuscripts, provided that the use of such tools is conducted responsibly, transparently, and without diminishing human intellectual responsibility. The use of AI must not replace the critical thinking, scholarly interpretation, and academic accountability of the author.

Authorship and Responsibility

Artificial Intelligence tools cannot be recognized as authors under any circumstances. Authorship implies intellectual accountability, the ability to respond to questions about the work, and responsibility for ethical declarations such as originality, conflict of interest, and copyright ownership. Because AI systems cannot assume such responsibilities, they must not be listed as authors or co-authors in submitted manuscripts.

Human authors retain full responsibility for the entire content of the manuscript, including any text, analysis, or language assistance generated by AI systems. Authors must ensure that all information produced with the help of AI tools is accurate, properly cited, and free from bias or fabricated information. In particular, authors must carefully review AI-generated suggestions to prevent the inclusion of hallucinated references, inaccurate data, or misleading claims.

Disclosure Requirements

Transparency is a fundamental requirement in the use of AI technologies within academic writing. Authors must clearly disclose any significant use of AI tools that contributed to the preparation of the manuscript. This disclosure ensures that editors and reviewers understand the extent and purpose of AI assistance in the research process.

Information regarding AI use should be explicitly stated in the Research Method section of the manuscript. The disclosure must identify the name of the AI tool used, including the version or model when available, such as ChatGPT, Gemini, Perplexity, Grammarly Premium, or similar systems. In addition, authors must explain the specific purpose for which the tool was employed. For example, the disclosure may state that the AI tool was used for grammar correction in the final draft, language polishing, or summarizing preliminary notes. The disclosure should reflect the actual function of the AI tool without exaggeration or concealment.

Ethics of AI Content Use

Authors are required to verify all information generated or suggested by AI tools before including it in a manuscript. AI systems may produce inaccurate statements, fabricated references, or misleading interpretations. Therefore, authors must independently confirm the validity of all facts, references, and citations.

The use of AI tools does not exempt authors from compliance with the journal’s plagiarism policy. Even when AI is used to paraphrase or restructure sentences, the responsibility for ensuring originality remains entirely with the author. All manuscripts will still be subject to plagiarism screening according to the journal’s standard procedures.

If AI tools are used to generate images, diagrams, or other visual materials included in the manuscript, the authors must clearly disclose the origin of those materials. This disclosure should include the name of the AI tool used and the prompt or method applied to generate the visual content. Authors must also ensure that the generated material complies with applicable copyright and licensing regulations.

Guidelines for the Use of AI by Editors and Reviewers

The editorial and peer review process requires strict confidentiality, impartiality, and scholarly integrity. For this reason, the use of AI tools by editors and reviewers must be carefully restricted.

Editors and reviewers are strictly prohibited from uploading or inputting any part of a manuscript under review into publicly accessible AI systems that store data or use submitted information for model training. Submitting manuscript content into such systems could compromise the confidentiality of the peer review process and potentially violate the intellectual property rights of the author.

AI tools may be used in limited situations that do not involve the analysis or disclosure of manuscript content. For instance, editors and reviewers may use AI tools to assist with minor grammatical improvements in their review comments or to summarize publicly available information. However, these tools must not be used to evaluate the scholarly quality, originality, or methodological validity of a manuscript.

All editorial decisions, including acceptance, revision requests, or rejection, must be based exclusively on human scholarly judgment. The responsibility for evaluating manuscripts cannot be delegated to automated systems.

Review and Sanction Protocol

To uphold academic integrity, Ra’ah Journal implements a systematic screening process to monitor the use of AI in submitted manuscripts.

AI Screening

As part of the initial editorial assessment, the journal employs AI detection tools designed to support academic integrity. These tools may include, but are not limited to, the AI Writing Detection feature provided by Turnitin or other comparable systems widely used in academic publishing.

The journal sets a preliminary threshold of 20 percent for AI-generated content relative to the total manuscript text. This threshold functions as an early indicator rather than a definitive judgment of the manuscript's quality or legitimacy.

If the AI detection result indicates a proportion of AI-generated text below the threshold, the manuscript will proceed to the standard peer review process. However, if the detection result reaches or exceeds the 20 percent threshold, the editorial team will temporarily suspend the review process. In such cases, the editor will contact the corresponding author to request clarification regarding the use of AI tools. The author may be required to provide evidence of substantial human intellectual intervention, verification of references and data, and justification for the extent of AI assistance used during manuscript preparation.

AI detection results alone do not automatically determine rejection. The final decision remains based on the editor’s critical assessment of the originality of the research idea, the reliability of the data, the coherence of the argument, and the author’s transparency regarding AI usage. The editor reserves the right to reject a manuscript if credible evidence of fabricated data, invented references, or unverifiable claims generated by AI is discovered.

Sanctions for Misconduct

Failure to comply with this AI Usage Policy constitutes a serious violation of publication ethics. Misconduct may include, but is not limited to, failure to disclose the use of AI tools, fabrication of data or references produced by AI systems, or listing AI software as an author.

If such violations are identified, the journal may impose sanctions proportionate to the severity of the misconduct. These sanctions may include immediate rejection of the manuscript during the editorial screening stage, retraction of the article if it has already been published, formal notification to the author's affiliated institution or funding agency, and a prohibition on the author submitting manuscripts to the journal for a specified period.

All authors, editors, and reviewers involved in the publication process at Ra’ah Journal: Journal of Sociology and Religion are required to read, understand, and comply with this AI Usage Policy. Adherence to these guidelines ensures that the integration of emerging technologies within academic publishing remains aligned with the principles of transparency, accountability, and scholarly integrity.