International Journal of Engineering Trends and Technology

Research Article | Open Access
Volume 74 | Issue 3 | Year 2026 | Article ID: IJETT-V74I3P104 | DOI: https://doi.org/10.14445/22315381/IJETT-V74I3P104

A Systematic Survey of AI-Generated Text Detection, Humanization, and Grammar Correction Techniques


Varsha S. Pimprale, Mahendra Deore

Received: 08 Jan 2026 | Revised: 24 Feb 2026 | Accepted: 28 Feb 2026 | Published: 28 Mar 2026

Citation:

Varsha S. Pimprale, Mahendra Deore, "A Systematic Survey of AI-Generated Text Detection, Humanization, and Grammar Correction Techniques," International Journal of Engineering Trends and Technology (IJETT), vol. 74, no. 3, pp. 43-53, 2026. Crossref, https://doi.org/10.14445/22315381/IJETT-V74I3P104

Abstract

Concerns about the semantic accuracy and authenticity of scholarly and professional documents produced with modern NLP technologies are well founded, as large language models (LLMs) continue to develop rapidly and see ever wider use. This work explores an ensemble approach to three interrelated challenges: AI-generated text detection, machine-text humanization, and context-aware grammar refinement, drawing on diverse artificial-intelligence models. Building on recent transformer models, the approach supports fine-grained, line-by-line assessment of how much of a document is original human work versus text produced by AI models. An extensive survey is also included, indicating that an end-to-end pipeline is required for text processing at present and in the near future. This work can later be extended toward LLM development with greater scalability for processing digital works.
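The line-by-line assessment described above can be illustrated with a minimal sketch. The `lexical_diversity` heuristic below is a hypothetical stand-in, not any model from this survey: real detectors score lines with transformer classifiers or token-probability statistics, but the per-line labeling and aggregation structure is the same.

```python
# Toy sketch of a line-by-line "AI-likeness" pass. The scoring heuristic
# (lexical diversity) is an illustrative stand-in for a real detector model.

def lexical_diversity(line: str) -> float:
    """Ratio of unique words to total words; low diversity is one weak
    signal sometimes associated with templated, machine-like text."""
    words = line.lower().split()
    if not words:
        return 1.0
    return len(set(words)) / len(words)

def assess_lines(text: str, threshold: float = 0.6):
    """Label each non-empty line, then report the fraction flagged.

    Returns (labels, flagged_fraction), where labels is a list of
    (line, label) pairs mirroring a per-line attribution report.
    """
    labels = []
    for line in text.splitlines():
        if not line.strip():
            continue
        score = lexical_diversity(line)
        label = "likely-machine" if score < threshold else "likely-human"
        labels.append((line, label))
    flagged = sum(1 for _, lab in labels if lab == "likely-machine")
    return labels, flagged / max(len(labels), 1)

sample = (
    "the cat sat on the mat the cat sat\n"
    "A genuinely varied human sentence appears here."
)
labels, fraction = assess_lines(sample)
```

In a full pipeline of the kind the survey describes, lines flagged as machine-generated would then be routed to humanization and grammar-refinement stages.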

Keywords

AI-Generated Text Detection, Grammar Correction, Large Language Models, Natural Language Processing, Text Humanization.
