Plagiarism and AI Thresholds in Academic Theses: A Practical and Positive Framework for Quality Assurance
- Apr 18
Academic theses are expected to show original thinking, honest research, and responsible use of sources. In recent years, the growth of artificial intelligence tools has created new questions about authorship, similarity, and academic integrity. This article presents a simple framework for evaluating plagiarism and AI-related similarity in theses using three thresholds: less than 10% as acceptable, 10–15% as needing evaluation, and above 15% as fail. The article explains why thresholds matter, how they should be applied, and why human judgment remains essential. It also discusses how institutions can support students through clear policies, education, and fair review systems. The purpose is not to punish students, but to improve quality, trust, and academic standards in a positive and balanced way.
Introduction
Academic writing is built on trust. A thesis should reflect the student’s own understanding, research effort, analysis, and conclusions. At the same time, all research depends on previous knowledge, so some level of similarity is normal. This is why similarity reports and AI detection discussions must be handled carefully.
Today, many students use digital tools for grammar correction, translation support, idea generation, and language improvement. These tools can be helpful when used ethically. However, problems begin when a thesis contains copied text, weak citation practices, or large sections produced without real intellectual contribution from the student. Because of this, many academic systems now combine plagiarism checks with closer review of AI-related writing patterns.
A practical standard can help institutions make fair decisions. One useful model is: less than 10% similarity as acceptable, 10–15% as needing evaluation, and above 15% as fail. This model supports quality assurance while leaving room for academic judgment.
Literature Review
Research on plagiarism has long shown that not all similarity is misconduct. Some repeated language appears in titles, methods sections, legal terminology, or common academic phrases. For this reason, experts in academic quality often warn against relying only on percentages. A number alone cannot explain whether the similarity comes from properly cited material, technical vocabulary, or direct copying without attribution.
The discussion has become more complex with the rise of AI writing tools. Recent academic literature suggests that AI can assist students in language polishing and structure planning, but it also creates risks. These include generic writing, weak originality, fabricated references, and reduced critical thinking. Scholars increasingly agree that the main issue is not only whether AI was used, but how it was used.
Examples from international higher education settings show different approaches. In some cases, a thesis with low similarity is accepted after routine review. In other cases, a medium similarity score leads to manual checking, oral defense questions, or requests for revision. In stricter systems, very high similarity often triggers rejection because it raises serious concerns about originality and authorship. These examples show that a threshold system works best when combined with human evaluation.
Methodology
This article uses an analytical and policy-based approach. It reviews the idea of plagiarism thresholds and applies them to a practical thesis evaluation framework. The method is based on three principles: clarity, fairness, and academic integrity.
First, the framework separates low, moderate, and high similarity ranges. Second, it recognizes that AI use should be judged by context, not by fear. Third, it assumes that thesis review should include both software tools and expert academic review.
The proposed standard is as follows:
- Less than 10% = Acceptable
- 10–15% = Needs Evaluation
- Above 15% = Fail
This model is designed to be simple enough for students to understand and strong enough for institutions to apply consistently.
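The three-level standard above is simple enough to express as a small decision rule. The sketch below is purely illustrative (the function name and boundary handling are assumptions, not part of any official tool), and in practice the middle band should always route to human review rather than an automatic verdict:

```python
def classify_similarity(percent: float) -> str:
    """Map a similarity percentage to the three-level standard.

    Hypothetical helper for illustration only; a real quality-assurance
    workflow pairs this with expert academic judgment.
    """
    if not 0 <= percent <= 100:
        raise ValueError("similarity must be a percentage between 0 and 100")
    if percent < 10:
        return "Acceptable"
    if percent <= 15:
        return "Needs Evaluation"  # triggers manual review, not punishment
    return "Fail"

# Example: a report showing 12% similarity
print(classify_similarity(12.0))
```

Note that the boundary values (exactly 10% or 15%) fall into the "Needs Evaluation" band here; an institution adopting such a rule should state its boundary convention explicitly.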
Analysis
The first threshold, less than 10%, is generally acceptable because a small amount of similarity is normal in academic writing. Standard definitions, commonly used phrases, and correctly cited short passages may appear in any thesis. In this range, the main assumption is that the work is substantially original.
The second threshold, 10–15%, should not lead automatically to punishment. Instead, it should trigger a deeper review. The examiner may check where the similarity appears, whether sources are properly cited, and whether the student can explain the work in a viva or defense. This range is important because it allows fairness. A thesis may need only correction, revision, or better referencing.
The third threshold, above 15%, indicates a serious risk to academic quality. At this level, the amount of copied or highly unoriginal material may be too high for a thesis to be considered reliable as independent academic work. In a positive quality culture, this outcome should still be handled professionally. The student should receive feedback, explanation, and where possible a path for improvement in future work.
AI creates an additional layer. A thesis may show low plagiarism yet still read as overly artificial, generic, or unsupported by real understanding. Therefore, AI evaluation should look for signs such as inconsistent style, weak analysis, fabricated references, and a student's inability to explain the content. This is why human review remains necessary even when software is used.
Findings
Several key findings emerge from this framework. First, percentage thresholds are useful as a guide, not as a complete judgment tool. Second, the middle category is essential because it protects students from unfair automatic decisions. Third, high similarity levels usually show a clear need for rejection or major rework. Fourth, AI-related concerns should be treated as part of academic quality review, not as a separate cause for panic.
A positive and educational approach is the most effective one. Institutions that clearly teach citation, paraphrasing, research ethics, and responsible AI use are more likely to achieve strong thesis quality. Students perform better when they understand expectations early and receive support before submission.
Conclusion
Plagiarism and AI thresholds in academic theses must be managed with balance, clarity, and fairness. A three-level standard—less than 10% acceptable, 10–15% needs evaluation, and above 15% fail—offers a practical system for protecting academic standards while supporting student development. It is simple, understandable, and useful for quality assurance.
At the same time, no percentage should replace expert academic judgment. A strong thesis is not only about low similarity; it is about originality, understanding, responsible source use, and authentic contribution. In the age of AI, this human dimension is more important than ever. The best path forward is not fear, but better guidance, better review, and stronger academic culture.
