AI Detectors
Author: Michelle Kassorla, Ph.D.
AI in Higher Ed Keynote Speaker • Principal Author of “AI Literacy in Teaching and Learning” • Associate Prof. of English at GSU | Perimeter College • AI Expert Panelist • 20+ Years in Higher Ed
Her References
OK. I’ve seen a lot of stuff about “AI Detectors” lately. Seriously, if you are still using an AI Detector, you are hurting yourself and your students. AI Detectors are:
- Completely fallible, in both directions. AI Detectors will tell you a student isn’t using AI when they are, and they will tell you that a student is using AI when they aren’t (see the quick sketch after this list for what even a small false-positive rate means at scale).
- Easily fooled: simple paraphrasing or light editing is often enough to slip machine-generated text past them (see Krishna et al. below).
- Discriminatory against ESL, disabled, and economically disadvantaged students.
- Likely to create an environment of mistrust that is not conducive to learning.
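To make the first point concrete, here is a minimal back-of-envelope sketch in Python. Every number in it (the false-positive rate, the number of sections, the share of honest essays) is a hypothetical assumption chosen for illustration, not a figure drawn from the studies cited below; the point is only that a detector that seems “right almost all of the time” still produces wrongful accusations once it is run on every paper a program collects.

```python
# Hypothetical illustration: how many honest students get flagged when a
# detector with a small false-positive rate is applied to every submission.
# All numbers below are assumptions for the sake of the example.

false_positive_rate = 0.01   # assume: 1% of human-written essays get flagged as AI
sections = 100               # assume: course sections running per term
students_per_section = 30    # assume: students in each section
essays_per_student = 4       # assume: graded essays per student per term
honest_share = 0.90          # assume: share of essays written without AI

honest_essays = sections * students_per_section * essays_per_student * honest_share
expected_false_flags = honest_essays * false_positive_rate

print(f"Honest essays scanned per term: {honest_essays:,.0f}")
print(f"Expected wrongful AI flags:     {expected_false_flags:,.0f}")
# With these assumptions: 10,800 honest essays and roughly 108 wrongful flags,
# each one a student facing an integrity accusation they did not earn.
```

And that is only the false-positive side. The same tools also wave through AI-written work (the false negatives), which is the other half of the fallibility problem documented in the references below.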
Here are the receipts:
“AI Detectors Don’t Work. Here’s What to Do Instead.” MIT Sloan Teaching & Learning Technologies, https://lnkd.in/eNbAfYED. Accessed 16 Feb. 2025.
Dugan, Liam, et al. RAID: A Shared Benchmark for Robust Evaluation of Machine-Generated Text Detectors. arXiv:2405.07940, arXiv, 10 June 2024. arXiv.org, https://lnkd.in/e7z4racy.
Elkhatat, Ahmed M., et al. “Evaluating the Efficacy of AI Content Detection Tools in Differentiating between Human and AI-Generated Text.” International Journal for Educational Integrity, vol. 19, no. 1, Sept. 2023, pp. 1–16. link.springer.com, https://lnkd.in/e6izJzkm.
Giray, Louie, et al. “Beyond Policing: AI Writing Detection Tools, Trust, Academic Integrity, and Their Implications for College Writing.” Internet Reference Services Quarterly, vol. 29, no. 1, Jan. 2025, pp. 83–116. Taylor and Francis, https://lnkd.in/e53dW9GN.
Krishna, Kalpesh, et al. Paraphrasing Evades Detectors of AI-Generated Text, but Retrieval Is an Effective Defense. proceedings.neurips.cc, https://lnkd.in/e2V6ipxv. Accessed 5 Sept. 2024.
Liang, Weixin, et al. “GPT Detectors Are Biased against Non-Native English Writers.” Patterns, vol. 4, no. 7, 2023. Google Scholar, https://lnkd.in/eeCM8fnG.
Perkins, Mike, et al. Data Files: GenAI Detection Tools, Adversarial Techniques and Implications for Inclusivity in Higher Education. Mar. 2024. data.mendeley.com, https://lnkd.in/eSRaksJi.
Rivero, Victor. “Beyond AI Detection: Rethinking Our Approach to Preserving Academic Integrity.” EdTech Digest, 5 Nov. 2024, https://lnkd.in/eGgQBXM5.
Sadasivan, Vinu Sankar, et al. Can AI-Generated Text Be Reliably Detected? arXiv:2303.11156, arXiv, 19 Feb. 2024. arXiv.org, https://lnkd.in/eCBvVPQy.
Weber-Wulff, Debora, et al. “Testing of Detection Tools for AI-Generated Text.” International Journal for Educational Integrity, vol. 19, no. 1, Dec. 2023, p. 26. arXiv.org, https://lnkd.in/e-uJDHbp.
AI Detectors Don’t Work. Here’s What to Do Instead.
https://mitsloanedtech.mit.edu/ai/teach/ai-detectors-dont-work/
At a Glance
As AI tools like ChatGPT gain popularity on campus, instructors face new questions around academic integrity. Some worry that they could inadvertently give higher grades to students who use AI compared to those who don’t use AI for coursework. Others are concerned that reliance on AI tools could hinder students’ development of critical thinking skills. Whether or not you integrate these technologies into your courses, it’s important to reflect on how you’ll address them with students. How can you foster academic honesty and critical thinking when every student has easy access to generative AI?
In response to these concerns, some companies have developed “AI detection” software. This software aims to flag AI-generated content in student work. However, AI detection software is far from foolproof—in fact, it has high error rates and can lead instructors to falsely accuse students of misconduct (Edwards, 2023; Fowler, 2023). OpenAI, the company behind ChatGPT, even shut down their own AI detection software because of its poor accuracy (Nelson, 2023).
In this guide, we’ll go beyond AI detection software. We’ll discuss how clear guidelines, open dialogue with students, creative assignment design, and other strategies can promote academic honesty and critical thinking in an AI-enabled world.
Set Clear Policies and Expectations
It’s important to be clear with your students about if, when, and how they should use AI in your courses (Eberly Center, n.d.; Schmidli et al., 2023). Here are some potential strategies:
- Announce your policies on AI use both in person and in writing. First, make sure to talk about these policies with your students during class at the beginning of the semester. It’s also essential to include the policies in your syllabus and course site (as recommended in MIT Sloan’s Generative AI Guiding Principles) so students can easily go back and reference your expectations (Teaching + Learning Lab, n.d.-b).
- Provide definitions of key terms like plagiarism and cheating in the context of generative AI tools.
- Share clear examples of appropriate versus inappropriate AI applications for specific tasks (Eberly Center, n.d.; Columbia Center for Teaching and Learning, n.d.). For example, you might allow students to use ChatGPT to brainstorm ideas or review grammar, but not to generate significant portions of essay content.
Setting clear expectations from the start can help you guide appropriate use of generative AI tools. Furthermore, by aligning our policies and practices with MIT Sloan’s Values, we can foster a culture of academic honesty and ethical leadership even as new technologies emerge.
Promote Transparency and Dialogue
In addition to transparent policies, you can support academic integrity through open conversations with your students. Consider these possible approaches:
- Hold class discussions where students can ask questions and share their perspectives about AI tools (Stanford Center for Teaching and Learning, 2023).
- Explain the rationale behind your AI policies so students understand that the goal is to facilitate meaningful learning, not just enforce compliance (Teaching + Learning Lab, n.d.-a).
- If your students will be using generative AI tools, establish clear expectations in terms of how they’ll acknowledge and cite their use of these technologies (McAdoo, 2023). Note that OpenAI’s terms of use state that users may not “represent that output from the Services was human-generated when it is not.”
Open conversations can help you build trust with your students and learn from them as partners as we navigate these new challenges together.
Foster Intrinsic Motivation
Thoughtfully designed assignments can reduce the temptation to misuse AI by sparking students’ intrinsic motivation. These are some research-backed strategies for enhancing student engagement:
- Help students understand how completing a given assignment will support their learning (CAST, n.d.-a).
- Allow students flexibility to incorporate their interests and creativity through choices in project formats, topics, and methods (Usable Knowledge, 2016; CAST, n.d.-b).
- Build in opportunities for self-reflection and metacognition—for example, by asking students to reflect on what they’ve learned and how they learned it (Smith & Darvas, 2017).
- Scaffold assignments by breaking them into smaller pieces that build on one another—for example, asking students to submit an outline before writing their final paper (Sotiriadou et al., 2019; Eberly Center, n.d.).
- Give students opportunities to revise their work based on feedback before grading (Columbia Center for Teaching and Learning, n.d.).
- Connect assignments to real-world contexts and applications that are meaningful for your students (Sotiriadou et al., 2019; CAST, n.d.-c).
No assignment design can prevent all improper use of tools. However, instructors can thoughtfully shape activities to motivate meaningful effort.
Ensure Inclusive Teaching
If you don’t want students using AI in your course, it can be tempting to revert to analog forms of assessment. However, relying on handwritten exams, in-class writing, or oral presentations can raise equity concerns (Ceres, 2023):
- Timed, handwritten exams may present a distracting challenge for many students, since few today are accustomed to composing by hand. This format can especially disadvantage those who are unable to hand-write quickly (Tai et al., 2022).
- Oral presentations put extra stress on students with anxiety and non-native English speakers, who may then face additional challenges that their peers do not (Grieve et al., 2021).
- In-class writing assignments might not fairly assess all students’ written communication skills, especially if they don’t have the chance to revise their work.
Prioritizing student success means creating an environment where everyone has an equitable opportunity to demonstrate their capabilities. Ultimately, using a mix of assessment approaches in your course is the best way to maximize equity and inclusion (Centre for Teaching and Learning, n.d.; Eberly Center, n.d.).
Conclusion
As the educational landscape evolves with new generative AI tools, remember that the heart of teaching and learning is undeniably human. By proactively establishing clear policies around the use of AI in your course, you can help students use AI responsibly. By engaging in open dialogue, you can encourage them to think critically about how and when they use these tools. By designing assignments that align with students’ interests and goals, you can make learning experiences more meaningful. And by adopting fair assessment methods, you can give every student the opportunity to showcase their skills.
Generative AI tools will affect how today’s students experience education. However, it’s still the authentic, human-centered learning experiences that will stand out and leave a lasting impact on students.
References
CAST. (n.d.-a). UDL: Clarify the meaning and purpose of goals. UDL Guidelines. https://udlguidelines.cast.org/engagement/effort-persistence/goals-objectives
CAST. (n.d.-b). UDL: Optimize choice and autonomy. UDL Guidelines. https://udlguidelines.cast.org/engagement/recruiting-interest/choice-autonomy
CAST. (n.d.-c). UDL: Optimize relevance, value, and authenticity. UDL Guidelines. https://udlguidelines.cast.org/engagement/recruiting-interest/relevance-value-authenticity
Centre for Teaching and Learning. (n.d.). IncludED: A guide to designing inclusive assessments. University of Oxford Centre for Teaching and Learning. https://ctl.ox.ac.uk/included-designing-inclusive-assessments
Ceres, P. (2023, January 26). ChatGPT is coming for classrooms. Don’t panic. Wired. https://www.wired.com/story/chatgpt-is-coming-for-classrooms-dont-panic
Columbia Center for Teaching and Learning. (n.d.). Promoting academic integrity. Columbia University. https://ctl.columbia.edu/resources-and-technology/resources/academic-integrity
Eberly Center. (n.d.). Generative AI Tools FAQ. Carnegie Mellon University. https://www.cmu.edu/teaching/technology/aitools/index.html
Edwards, B. (2023, July 14). Why AI detectors think the US Constitution was written by AI. Ars Technica. https://arstechnica.com/information-technology/2023/07/why-ai-detectors-think-the-us-constitution-was-written-by-ai
Fowler, G. A. (2023, April 14). We tested a new ChatGPT-detector for teachers. It flagged an innocent student. The Washington Post. https://www.washingtonpost.com/technology/2023/04/01/chatgpt-cheating-detection-turnitin
Grieve, R., Woodley, J., Hunt, S. E., & McKay, A. (2021). Student fears of oral presentations and public speaking in higher education: A qualitative survey. Journal of Further and Higher Education, 45(9), 1281-1293. https://doi.org/10.1080/0309877x.2021.1948509
McAdoo, T. (2023, April 7). How to cite ChatGPT. APA Style. https://apastyle.apa.org/blog/how-to-cite-chatgpt
Nelson, J. (2023, July 24). OpenAI quietly shuts down its AI detection tool. Decrypt. https://decrypt.co/149826/openai-quietly-shutters-its-ai-detection-tool
Schmidli, L., Harris, M., Caffrey, A., Caloro, A., Klein, J., Loya, L., Macasaet, D., Schock, E., & Story, P. (2023, January 5). Considerations for using AI in the classroom. L&S Instructional Design Collaborative at the University of Wisconsin Madison. https://idc.ls.wisc.edu/guides/using-artificial-intelligence-in-the-classroom
Smith, V. D., & Darvas, J. W. (2017). Encouraging student autonomy through higher order thinking skills. Journal of Instructional Research, 6, 29-34. https://eric.ed.gov/?id=EJ1153306
Sotiriadou, P., Logan, D., Daly, A., & Guest, R. (2019). The role of authentic assessment to preserve academic integrity and promote skill development and employability. Studies in Higher Education, 45(11), 2132–2148. https://doi.org/10.1080/03075079.2019.1582015
Stanford Center for Teaching and Learning. (2023, June 19). Pedagogic strategies for adapting to generative AI chatbots. Stanford Teaching Commons. https://teachingcommons.stanford.edu/news/pedagogic-strategies-adapting-generative-ai-chatbots
Tai, J., Mahoney, P., Ajjawi, R., Bearman, M., Dargusch, J., Dracup, M., & Harris, L. (2022). How are examinations inclusive for students with disabilities in higher education? A sociomaterial analysis. Assessment & Evaluation in Higher Education, 48(3), 390–402. https://doi.org/10.1080/02602938.2022.2077910
Teaching + Learning Lab. (n.d.-a). Rethinking your problem sets in the world of generative AI. Massachusetts Institute of Technology. https://tll.mit.edu/rethinking-your-problem-sets-in-the-world-of-generative-ai
Teaching + Learning Lab. (n.d.-b). Teaching & learning with ChatGPT: Opportunity or quagmire? Part III. Massachusetts Institute of Technology. https://tll.mit.edu/teaching-learning-with-chatgpt-opportunity-or-quagmire-part-iii
Usable Knowledge. (2016, September 11). Intrinsically motivated. Harvard Graduate School of Education. https://www.gse.harvard.edu/ideas/usable-knowledge/16/09/intrinsically-motivated
Advantages of Using AI in Education
Author: Alan Hilsabeck
✅ Critical Thinking is Activated – Instead of relying on AI for shortcuts, students are asking better questions, analyzing AI outputs, and debating accuracy—skills essential for the workforce.
✅ Participation Has Skyrocketed – Students who were once hesitant to engage are now leading discussions, collaborating with peers, and confidently presenting ideas backed by AI-supported research.
✅ Engagement is Higher Than Ever – When students see AI as a tool, not a crutch, they actively explore new ideas, refine their reasoning, and produce more thoughtful work.
✅ Real-World Readiness is Growing – Instead of fearing AI, they’re learning how to leverage it ethically—a skill that will set them apart in future careers.
This isn’t about replacing traditional learning—it’s about enhancing it. AI is a catalyst for deeper thinking, problem-solving, and creativity. The key isn’t banning AI—it’s teaching students how to use it the right way.
Teachers who embrace this shift aren’t just preparing students for tests—they’re preparing them for the future.
Beyond Policing: AI Writing Detection Tools, Trust, Academic Integrity, and Their Implications for College Writing
Resource: ResearchGate, January 2025
