
I'd like to 100% prohibit LLMs, but I refuse to police AI use in my academic writing & writing-intensive courses


So, here's what my course policy is, and how it's working.


Excerpt from a syllabus stating a course policy about AI/LLM use. Text reads: "ACADEMIC INTEGRITY. Participating regularly in discussions and staying up to date on coursework is an important aspect of academic integrity. In addition, you must also follow UW's Academic Honesty Code (UW Regulation 2-114), which prohibits acts of plagiarism. For the purposes of this course, plagiarism is presenting the writing, images, or other intellectual property of others as one's own without appropriate permission, attribution, and/or citation. Just as you cite written sources, you are expected to attribute images with the same diligence. If you have questions about how to credit and/or cite sources and images in your work, please contact me; I'm happy to help you. Note: Academic dishonesty includes anything that represents someone else's ideas as your own without attribution. Representing someone else's work as your own is intellectual theft (stealing) and includes, but is not limited to: unapproved assistance on exams; plagiarism (use of any amount of another person's writings, blog posts, images, publications, and other materials without attributing that material to that person with citations, including ideas conjured up by Large Language Models, aka AI such as ChatGPT, or AI-generated images, etc.); and fabrication of referenced information (which LLMs like ChatGPT are well known to do frequently). Facilitation of another person's academic dishonesty is also considered academic dishonesty and will be treated identically." To read the full policy, read the full blog post.
Screenshot of an excerpt from my AI/LLM policy for my spring 2025 scicomm course. To read the full policy, scroll down in this blog post to the subheader “my AI policy.”

A colleague at another institution recently asked me how I was handling AI in my courses. He knows I teach writing and writing-intensive courses, so it was a fair bet that I have opinions.


GenAI/LLM concerns

What he perhaps did not know is that this is the stickiest topic in the writing mentorship book I co-wrote with Stephen B. Heard (pre-order now! use UCPNEW for 30% off!). While we navigated many philosophical, stylistic, and practical differences, LLMs are the molehill Steve and I could have easily built into a mountain.


Steve has written a fair bit about his (mostly positive) stance on AI/LLM use in writing, STEM training, etc. Until now, I have written only one thing, and it was a firm critique of ChatGPT's incompetence. You'll have to read our book to see how we resolved our differences!


But something we didn't get into in the book is that I'm on the "avoid LLMs/AI" side of the fence. I have a lot of concerns. Two of them are (a) the theft of other people's intellectual property to train LLMs and (b) the way LLM use teaches students to severely devalue the intrinsically human activities of creativity and communication. (I plan to write more about that at a later date.)


Anti-AI policing

In the meantime, I realize that I can't effectively prohibit students from using LLMs in my classes without policing for said use. I simply refuse to do that. I didn't go into teaching to be a patroller of or adversary to students. [1] Moreover, there are loads of false positives in so-called plagiarism checkers, and the equivalents for LLM detection are equally suspect. [2] Don't get me started on using LLMs to provide writing/assignment feedback or correspond with students. 😤


Limited institutional guidance

At the same time, my institution doesn't have a universal policy on AI use in courses. However, Academic Affairs offers instructors four syllabus template options re AI:


"AI Technology: We recommend that faculty include a section focused on permitted/unpermitted AI technology use in each of their syllabi, generally in the location of their Student Academic Dishonesty statement. Additionally, it is important that faculty clearly communicate their expectations of course collaboration policies (with other students) in this same area. We offer the following language as draft material (adapted from University of Delaware) that instructors may want to consider.

Option 1: Use prohibited. Students are not permitted to use advanced automated artificial intelligence or machine learning tools on assignments in this course. Each student is expected to complete each assignment without substantive assistance from others, including automated tools.

Option 2: Use only with prior permission. Students are permitted to use advanced automated artificial intelligence or machine learning tools on assignments in this course if instructor permission is obtained in advance. Unless given permission to use those tools, each student is expected to complete each assignment without substantive assistance from others, including automated tools.

Option 3: Use only with acknowledgement. Students are permitted to use advanced automated artificial intelligence or machine learning tools on assignments in this course if that use is properly documented and credited. For example, text generated using ChatGPT-3 should include a citation such as: 'Chat-GPT-3. (YYYY, Month DD of query). "Text of your query." Generated using OpenAI. https://chat.openai.com/' Material generated using other tools should follow a similar citation convention.

Option 4: Use is freely permitted with no acknowledgement. Students are permitted to use advanced automated artificial intelligence or machine learning tools on assignments in this course; no special documentation or citation is required."


None of those felt adequate to me on their own.


My “use of AI in coursework” policy

So, here's my course policy, which I embed in the Academic Integrity section of my syllabi. [3] I'm providing it as a link so this post doesn't get horrendously long. The short story is that I require students to discuss with me in advance any LLM/AI use they'd like to do in the class and to get my permission for it. If permission is granted, they also need to cite the tool. [4]


How’s this policy working?

  1. No students have asked me for permission to use LLMs or any other AI. Zero. So, that aspect isn't working.

  2. I do not run students' work through any sorts of "checker" apps/programs. See above re false positives and not policing students.

  3. Even so, I could tell that several students in spring 2025 did use LLMs/AIs without securing permission. The "tells" included things like:

    1. The super-saturated, "utopia"-style images generated by a lot of the current visual AI things.

    2. Images that aligned way too perfectly with the student's topic/content to be anything but a custom image. [5] I know (and specify) that students don't spend money to complete course projects, so if they didn't commission an illustrator to create the image, it clearly came from AI.

    3. Writing that analyzed itself within the text.

    4. Writing that was quite circular or even repetitive, while still being pretty clearly written.

    5. Writing with zero grammatical or spelling errors, but with content errors or unclear "thinking." (Usually, a writer of any skill level is going to polish ideas before grammar/spelling. This is particularly true for developing/undergrad writers, who are often dashing out a single, first/rough draft right before a deadline.) [6]

    6. Writing skill/voice/tone that changed abruptly partway into the semester.

  4. For students who use this technology without securing permission, I am reducing points on their grades for those assignments. (Most of the time, I just do complete/incomplete grades, as I want them to experience the extensive writing in my courses as a skill building process, not an excessive number of "exams.")


I recognize that there are folks who are a lot more AI-permissive and even build it into their assignments, but this is where I land with it after loads of discussions, lots of reading, and 20+ years of teaching experience.


How about you?

Regardless of my stance, I think we should absolutely be talking about this as a key philosophical and applied aspect of being academics. I'm curious: what is your AI/LLM policy in your courses, and how did you settle on it?


[1] One of my favorite, recent-ish books on this theme is Radical Hope: A Teaching Manifesto by Kevin Gannon. It’s a short book very much worth your time.


[2] For one thing, a lot of the LLM “detectors” claim that using an em dash—a long-standing, essential part of English-language writing, and one of my favorites—is a sure sign of LLM writing. That’s preposterous. It’s categorically wrong. And it unhelpfully reduces people’s understanding of the craft of writing to searching for specific types of punctuation. (That—reducing writing to line editing and punctuation policing—is a topic that Steve and I cover repeatedly and at length in our book!)


[3] You can view all my course syllabi here.


[4] I just recently came across this model as a tool for coaching students in their discussions/disclosure of using an LLM. I haven’t assigned it yet (I’m on sabbatical, so not teaching for the next two terms), but I probably will work it in next time I teach.


[5] I used to partially make my living as an illustrator, so I know a custom image use-case when I see one.


[6] Recognizing, embracing, and working through the stages of writer development and the development of a single piece of writing are two of the key ideas/tools I provide (with Steve) in our forthcoming book! You can pre-order Teaching and Mentoring Writers in the Sciences from University of Chicago Press now! (Use UCPNEW for 30% off!)


P.S. You still have time to get 30% off of Teaching and Mentoring Writers in the Sciences! Just use the code UCPNEW. This is a labor of love I co-wrote with Stephen Heard to help folks in the sciences connect with the 50+ years of research on how to effectively teach writing. It comes out from University of Chicago Press on November 18th!


© 2025 by Bethann Garramon Merkle.
