Students are potentially writing millions of papers with AI


New data released by plagiarism detection company Turnitin shows that students submitted more than 22 million papers last year that may have used generative AI.

A year ago, Turnitin launched an AI writing detection tool trained on papers written by students as well as on AI-generated text. Since then, the detector has reviewed more than 200 million papers, written primarily by high school and college students. Turnitin found that 11 percent of those papers may contain AI-written language in at least 20 percent of their content, and that 3 percent of the total papers reviewed were flagged for 80 percent or more AI writing. (Turnitin is owned by Advance, which also owns WIRED's publisher, Condé Nast.) Turnitin says its detector has a false positive rate of less than 1 percent when analyzing full documents.

ChatGPT's launch was accompanied by fears that it would spell the end of the English class essay. A chatbot can synthesize and distill information quickly, but that doesn't mean it always gets it right. Generative AI has been known to hallucinate, inventing its own facts and citing academic references that don't actually exist. Generative AI chatbots have also been caught spewing biased text about gender and race. Despite those flaws, students have used chatbots for research, for organizing ideas, and as ghostwriters. Traces of chatbots have also been found in peer-reviewed, published academic writing.

Educators want to hold students accountable for using generative AI without permission or disclosure, but first they need a reliable way to prove that AI was used in a given assignment. Instructors have at times tried to identify AI writing on their own, using haphazard, untested methods to enforce rules and devising homegrown solutions that upset students. Further complicating the matter, some teachers are themselves using generative AI in their grading processes.

The use of generative AI is difficult to detect. It's not as simple as flagging plagiarism, because the generated text is still original text. There are also nuances in how students use generative AI: some may ask chatbots to write entire passages or complete papers for them, while others may use the tools as an aid or a brainstorming partner.

Students aren't drawn only to ChatGPT and similar large language models. So-called word spinners are another type of AI software that rewrites text, which can make it less obvious to a teacher that work was plagiarized or AI-generated. Turnitin's AI detector has been updated to catch word spinners, says Annie Chechitelli, the company's chief product officer. It can also flag work rewritten by services like Grammarly, which now has its own generative AI tool. As familiar software increasingly adds generative AI components, it becomes harder for students to know what they can and can't use.

Detection tools themselves are prone to bias. English language learners may be more likely to be falsely flagged: a 2023 study found a 61.3 percent false positive rate when seven different AI detectors evaluated Test of English as a Foreign Language (TOEFL) essays. Turnitin's detector was not examined in that study. The company says it has trained its detector on writing by English language learners as well as by native English speakers. A study published in October found that Turnitin was the most accurate of 16 AI language detectors in a test in which the tools examined undergraduate papers and AI-generated papers.