Generative AI Policies
Journal Economic Business Innovation (JEBI) recognizes that generative artificial intelligence and AI-assisted technologies may support scholarly communication when used responsibly, transparently, and ethically. This policy provides guidance for authors, reviewers, and editors regarding the acceptable and prohibited use of generative AI in manuscript preparation, peer review, editorial handling, and publication.
1. Purpose of the Policy
This policy aims to ensure that the use of generative AI and AI-assisted technologies supports academic quality without compromising research integrity, authorship accountability, originality, confidentiality, transparency, data protection, and ethical responsibility. JEBI supports responsible technological innovation while maintaining strict standards for scholarly contribution, publication ethics, peer-review integrity, and editorial independence.
2. Definition of Generative AI and AI-Assisted Technologies
Generative AI refers to artificial intelligence systems capable of producing, transforming, summarizing, translating, analyzing, or organizing content based on user prompts. This may include text, images, tables, computer code, statistical interpretation, summaries, references, visual materials, or other forms of academic content. AI-assisted technologies include tools used for language editing, grammar correction, translation support, reference organization, coding assistance, data processing support, image generation, and other activities that may influence the preparation or presentation of scholarly work.
3. Use of Generative AI by Authors
Authors may use generative AI or AI-assisted technologies only as supporting tools in the preparation of a manuscript. Acceptable uses include improving language clarity, grammar, readability, structure, and formatting; translation support; coding assistance; preliminary idea organization; and non-substantive editorial refinement. Such use must not replace the author’s intellectual contribution, methodological responsibility, critical reasoning, interpretation of findings, or scholarly judgment.
Authors remain fully responsible for the accuracy, originality, validity, integrity, ethical compliance, and scholarly quality of all submitted and published content. Any AI-assisted output must be carefully reviewed, verified, corrected, edited, and approved by the authors before submission. Authors must ensure that AI tools do not introduce fabricated information, inaccurate claims, unsupported interpretations, false references, plagiarism, bias, or misleading content.
4. AI Tools Cannot Be Listed as Authors
Generative AI tools, large language models, chatbots, software systems, or any non-human technologies cannot be listed as authors or co-authors. Authorship requires human accountability, including responsibility for the integrity of the work, approval of the final manuscript, disclosure of conflicts of interest, response to reviewer comments, and responsibility for ethical compliance. Since AI tools cannot assume these responsibilities, they do not meet the criteria for authorship.
5. Mandatory Disclosure of AI Use
Authors must disclose the use of generative AI or AI-assisted technologies when such tools are used beyond basic spelling, grammar, formatting, or reference management. Disclosure is required when AI tools contribute to text generation, translation, summarization, data interpretation support, literature mapping, coding assistance, figure preparation, image generation, or other substantive elements of the manuscript.
The disclosure should be included in a dedicated statement in the manuscript, preferably before the reference list or in another section required by the journal. The statement must identify the tool used, describe the purpose of use, and confirm that the authors reviewed, edited, verified, and approved the final content.
Suggested Disclosure Statement:
During the preparation of this manuscript, the author(s) used [name of AI tool/service] for [specific purpose, such as language editing, translation support, idea organization, coding assistance, or readability improvement]. After using this tool/service, the author(s) reviewed, edited, verified, and approved the content. The author(s) take full responsibility for the final version of the manuscript.
6. Permitted Use of AI-Assisted Technologies
- Language editing, grammar correction, and readability improvement.
- Translation support, provided that the final meaning is verified by the authors.
- Formatting assistance, structure refinement, and non-substantive editorial polishing.
- Assistance with coding, data cleaning, or technical workflow, provided that all outputs are validated.
- Preliminary literature organization, provided that all sources and references are independently verified.
- Preparation of visual or graphical materials, only when disclosed and when not misleading or used as research evidence.
7. Prohibited Uses of Generative AI
- Using AI to fabricate data, findings, citations, quotations, references, respondents, ethical approval, or research evidence.
- Submitting AI-generated text without human verification, scholarly contribution, and proper disclosure.
- Using AI to manipulate images, figures, tables, datasets, or research results in a misleading manner.
- Using AI to generate false references, unverifiable sources, inaccurate literature claims, or unsupported theoretical arguments.
- Using AI to obscure plagiarism, duplicate publication, salami publication, text recycling, or unethical authorship practices.
- Using AI to replace the author’s responsibility for research design, analysis, interpretation, argumentation, conclusions, and scientific judgment.
- Using hidden prompts, invisible text, or prompt-injection techniques to manipulate peer review, editorial screening, indexing systems, or automated manuscript checks.
8. AI-Generated Images, Figures, and Visual Materials
The use of AI-generated or AI-modified images, figures, graphical abstracts, diagrams, illustrations, or visual materials must be disclosed clearly. Authors must ensure that such materials do not misrepresent data, create false evidence, alter research results, violate copyright, infringe privacy, or mislead readers.
AI-generated images or visual materials involving identifiable persons, sensitive content, confidential materials, copyrighted images, empirical evidence, or manipulated research data may be rejected if they compromise ethical standards, data integrity, transparency, or scholarly reliability.
9. Verification of Data, Sources, and References
Authors are responsible for verifying all AI-assisted outputs, including facts, citations, references, quotations, equations, tables, data summaries, statistical interpretations, theoretical claims, and methodological statements. AI-generated references must not be included unless they are independently verified against reliable bibliographic databases, publisher websites, DOI records, official repositories, or other authoritative sources.
10. Use of Generative AI by Reviewers
Reviewers must treat manuscripts, supplementary files, reviewer reports, author responses, editorial correspondence, and unpublished research materials as confidential documents. Reviewers must not upload submitted manuscripts, unpublished data, figures, tables, supplementary materials, review reports, or editorial correspondence into public or third-party generative AI tools.
Reviewers must not rely on generative AI to perform scientific assessment, evaluate novelty, judge methodology, interpret results, assess theoretical contribution, or make review recommendations. Peer review requires expert human judgment, confidentiality, critical reasoning, accountability, and subject expertise. Any concern related to possible AI misuse in a manuscript should be reported confidentially to the handling editor.
11. Use of Generative AI by Editors
Editors must protect the confidentiality of submitted manuscripts, author information, reviewer identities, reviewer reports, editorial correspondence, decision letters, and unpublished research materials. Editors must not upload confidential manuscript content, reviewer reports, author responses, or editorial decision materials into public or third-party generative AI tools.
Editorial decisions must be made by human editors based on journal scope, reviewer recommendations, ethical assessment, originality, methodological rigor, contribution to the field, and compliance with publication standards. AI tools must not replace editorial judgment or be used as the sole basis for acceptance, rejection, revision, or ethical action.
12. Confidentiality and Data Protection
Manuscripts under review are confidential documents. Authors, reviewers, and editors must ensure that the use of AI tools does not violate confidentiality, intellectual property rights, personal data protection, unpublished research ownership, institutional policies, or legal obligations. Any unauthorized disclosure of manuscript content through AI tools may be treated as a breach of publication ethics.
13. Editorial Screening and AI-Related Concerns
JEBI may conduct editorial checks for originality, similarity, citation reliability, image integrity, ethical compliance, data validity, and possible misuse of AI-generated content. The journal does not rely solely on automated AI-detection tools, as such tools may produce uncertain, incomplete, or inaccurate results. Editorial assessment will consider manuscript quality, transparency, coherence, verifiability, ethical compliance, author disclosure, and the presence of any evidence suggesting fabrication or manipulation.
14. Failure to Disclose or Misuse of AI
Failure to disclose substantial AI use, submission of unverified AI-generated content, fabrication of references or data, misuse of AI-generated images, manipulation of peer review, or violation of confidentiality may result in editorial action. Depending on the severity of the case, actions may include request for clarification, manuscript revision, rejection, correction, expression of concern, retraction, notification to institutions, or other measures consistent with publication ethics standards.
15. Author Responsibility Statement
By submitting a manuscript to JEBI, authors confirm that they are fully responsible for the accuracy, originality, integrity, validity, transparency, and ethical compliance of the submitted work. Authors also confirm that any use of generative AI or AI-assisted technologies has been properly disclosed, critically reviewed, verified, edited, and approved by all authors before submission.




