Artificial intelligence has benefited humanity in many areas of life for years. Yet for all its advantages, AI is not infallible, and its mistakes can directly affect people. To understand these shortcomings more clearly, consider the case of Moira Olmsted and AI detectors.
The Incident: Moira Olmsted's Experience
Moira Olmsted, a 24-year-old student at Central Methodist University, faced a serious setback when an automated detection tool flagged her writing as AI-generated. The accusation came just weeks into the autumn 2023 semester, while she was juggling coursework, a full-time job, a child, and a pregnancy. The flagged assignment received a zero, putting her academic standing and her path to graduation at risk.
The accusation hit Olmsted hard. She writes in a formulaic style, which she attributes to her autism spectrum disorder, and her demanding schedule already made writing assignments difficult. She contacted her professor and university officials to dispute the claim, and her grade was eventually restored, though with a stern warning: any future flag would be treated as plagiarism. The experience left her uneasy about finishing her degree. To protect herself against further false accusations, she began recording herself while completing assignments and tracking her revision history in Google Docs. That extra effort added to an already heavy workload and spilled over into both her academic and personal life.
Olmsted’s assignment, which was flagged as likely written by AI. Photographer: Nick Oxford/Bloomberg
The Rise of AI Detection in Educational Institutions
Since the launch of OpenAI's ChatGPT, educational institutions have scrambled to adjust to the new landscape of generative AI. Concerns about academic integrity have led many educators to adopt AI detection tools such as Turnitin, GPTZero, and Copyleaks to flag material in student submissions that may be AI-generated. A survey by the Center for Democracy & Technology found that nearly two-thirds of teachers use an AI checker regularly. The goal is to preserve academic integrity, but these tools are not infallible, and false accusations have become a growing concern.
The rapid rollout of AI detection is part of a broader effort by schools and universities to keep student assessment trustworthy. As AI-generated content becomes more common, educators face pressure to verify that students' work is their own. Yet these tools often lack the nuance needed to reliably distinguish human writing from AI-generated text, which leads to cases like Olmsted's.
False Accusations and Their Consequences
Bloomberg Businessweek recently tested two leading AI detectors on 500 college essays from Texas A&M University, all written before the launch of ChatGPT. The detectors incorrectly classified 1% to 2% of these human-written essays as AI-generated. That error rate may sound small, but it can carry serious consequences for students like Olmsted, whose academic standing depends on being able to demonstrate their integrity. A single false accusation can affect a student's grades, reputation, and ability to graduate.
Source: Bloomberg analysis of Texas A&M essays, GPTZero, and Copyleaks
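To make that error rate concrete, here is a minimal back-of-the-envelope sketch in Python. The 500-essay sample and the 1% to 2% false-positive rate come from the Bloomberg test described above; the campus-scale figures (10,000 students submitting 10 essays each per year) are purely hypothetical assumptions used for illustration.

```python
def expected_false_flags(num_essays: int, false_positive_rate: float) -> float:
    """Expected number of human-written essays wrongly flagged as AI-generated."""
    return num_essays * false_positive_rate

# Bloomberg's test: 500 pre-ChatGPT essays, 1%-2% false-positive rate.
for rate in (0.01, 0.02):
    print(f"500 essays at {rate:.0%}: ~{expected_false_flags(500, rate):.0f} wrongly flagged")

# Hypothetical campus scale (an assumption, not from the article):
# 10,000 students each submitting 10 essays per year.
submissions = 10_000 * 10
for rate in (0.01, 0.02):
    print(f"{submissions:,} essays at {rate:.0%}: "
          f"~{expected_false_flags(submissions, rate):,.0f} wrongly flagged")
```

Even at the lower 1% rate, a detector applied across an entire campus under these assumptions would implicate on the order of a thousand students who wrote their essays themselves, which is why a "small" error rate is anything but small in practice.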
The emotional toll of a false accusation is substantial. Wrongly accused students may face a drawn-out process to prove their innocence: meeting with professors, submitting evidence of their writing process, and sometimes appealing to higher authorities within the institution. That process can be exhausting and demoralizing, especially for students already juggling many obligations. The fear of being flagged again can also change how students write, as they avoid particular words or sentence structures they suspect might trip an AI detector.
A Climate of Fear in Educational Settings
This reliance on AI detection has bred suspicion and anxiety in the classroom. Many students use tools such as Grammarly to improve their writing, and now worry that doing so will set off AI detection systems, since some detectors can misread machine-assisted editing as AI generation.
Kaitlyn Abellar, a student at Florida SouthWestern State College, uninstalled Grammarly after learning that its AI-assisted features could make her work appear AI-generated. This fear of using helpful writing tools limits students' ability to improve and erodes their confidence in using them; avoiding a cheating accusation starts to matter more than learning and personal growth.
This culture of fear extends beyond writing. Many students feel compelled to prove the authenticity of their work to an unreasonable degree, documenting their writing process, taking screenshots, or recording themselves as they complete assignments, all to head off accusations. Such pervasive distrust can undermine education, because students end up spending their energy defending their integrity rather than learning and improving.
A Vision for Tomorrow
For students like Olmsted, the hope is for education to focus less on fending off false accusations and more on learning itself, with technology enhancing rather than detracting from their accomplishments. AI can improve education if it is adopted thoughtfully and with a clear understanding of its limitations.
Going forward, institutions and instructors must work together to set policies that are fair, transparent, and supportive of all students. That means reevaluating how AI detection systems are used and exploring alternative approaches that prioritize education over punishment. By cultivating a culture of trust and collaboration, the education system can ensure that technology empowers students rather than holding them back.
Olmsted's story underscores the importance of empathy and understanding in education. As technology advances, the way we support students must evolve with it. By emphasizing fairness, equity, and a genuine commitment to learning, educators can create the conditions for every student to succeed, whatever obstacles they face.