Publication | Open Access
Testing of detection tools for AI-generated text
Citations: 332 | References: 18 | Year: 2023
Recent advances in generative pre‑trained transformer language models have highlighted the risk of unfair use of AI‑generated content in academia and spurred efforts to develop detection solutions. This study examines the functionality of detection tools for AI‑generated text, evaluating accuracy and error types, and investigates whether such tools can reliably distinguish human from ChatGPT text and how translation or obfuscation affect detection. The authors evaluated 12 publicly available tools plus Turnitin and PlagiarismCheck, summarised related research, and performed a comprehensive test using a rigorous methodology and a broad tool set. They found that the tools are neither accurate nor reliable, tend to misclassify AI output as human, and that content obfuscation further degrades performance, raising concerns about their use in academic settings.
Abstract
Recent advances in generative pre-trained transformer large language models have emphasised the potential risks of unfair use of artificial intelligence (AI) generated content in an academic environment and intensified efforts in searching for solutions to detect such content. The paper examines the general functionality of detection tools for AI-generated text and evaluates them based on accuracy and error type analysis. Specifically, the study seeks to answer research questions about whether existing detection tools can reliably differentiate between human-written text and ChatGPT-generated text, and whether machine translation and content obfuscation techniques affect the detection of AI-generated text. The research covers 12 publicly available tools and two commercial systems (Turnitin and PlagiarismCheck) that are widely used in the academic setting. The researchers conclude that the available detection tools are neither accurate nor reliable and have a main bias towards classifying the output as human-written rather than detecting AI-generated text. Furthermore, content obfuscation techniques significantly worsen the performance of tools. The study makes several significant contributions. First, it summarises up-to-date similar scientific and non-scientific efforts in the field. Second, it presents the result of one of the most comprehensive tests conducted so far, based on a rigorous research methodology, an original document set, and a broad coverage of tools. Third, it discusses the implications and drawbacks of using detection tools for AI-generated text in academic settings.