
In mid-2025, GovTech reported that graduate students at the University at Buffalo protested the use of Turnitin’s AI detection tool in academic integrity cases. The article describes students facing potential academic sanctions after the tool flagged their work, including at least one case where a student was told she could not graduate until the matter was resolved.
This made me pause.
The university said it does not rely solely on AI detection software when adjudicating cases and that instructors must have additional evidence to meet its standard of proof, with review and appeal processes in place. One student also said Turnitin’s score was the only evidence she was presented with while under review, and raised concerns about checks, balances, and consistency in how the tool is used.
Around the same time, contributors writing in the Guardian’s letters section argued that there is no simple solution via AI detectors. One contributor cited a study reporting detector accuracy under 40% overall and 22% in adversarial cases, and argued that because AI use leaves no definitive trace, it can be almost impossible to prove without an admission.
Taken together, these examples suggest a governance problem rather than a single institutional failure. Automated judgments are being introduced into high-stakes processes, and institutions are still working out what standards of evidence, transparency, and appeal should look like.
If this dynamic is already visible in higher education, it raises wider questions about how similar automated decisions might be handled in the future as such systems spread into hiring, credit, or public services.
Curious how others here think appeal and oversight should be designed when automated systems are involved in consequential decisions.
https://www.govtech.com/education/higher-ed/university-at-buffalo-students-protest-use-of-ai-detection-tool

Submission Statement:
This post looks at early evidence of how automated decision tools are being incorporated into university disciplinary processes and asks what this suggests for future governance. As AI systems are increasingly used in high-stakes institutional decisions, the post invites discussion on how appeal, transparency, and oversight mechanisms should evolve to keep pace.
Students, you mustn’t use AI for your studies!
Workers, use AI or lose your job!
honestly, it means we’re going to need clearer policies around what “ai-assisted” vs. “ai-generated” actually means. right now most detection tools have false positive rates that are way too high to be used as sole evidence. universities will probably need human review processes and maybe even shift towards assessment methods that are harder to automate, like oral exams or project-based work that requires process documentation.
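to make the false-positive point concrete, here’s a rough back-of-the-envelope sketch in Python (every number here is an illustrative assumption, not anything Turnitin or any other vendor publishes):

```python
# Rough illustration of why even a "low" false positive rate still wrongly
# flags a lot of students when most students don't cheat. Every number below
# is an assumption for the sake of the example, not a published vendor spec.

def flag_breakdown(num_students, cheat_rate, true_positive_rate, false_positive_rate):
    cheaters = num_students * cheat_rate
    honest = num_students - cheaters
    true_flags = cheaters * true_positive_rate    # AI users correctly flagged
    false_flags = honest * false_positive_rate    # honest students wrongly flagged
    total_flags = true_flags + false_flags
    precision = true_flags / total_flags if total_flags else 0.0
    return true_flags, false_flags, precision

# Assume a 1,000-student cohort where 5% actually submit AI-generated work,
# and a detector that catches 90% of them but falsely flags 2% of honest work.
true_flags, false_flags, precision = flag_breakdown(1000, 0.05, 0.90, 0.02)
print(f"correctly flagged: {true_flags:.0f}")                        # ~45
print(f"wrongly flagged:   {false_flags:.0f}")                       # ~19
print(f"share of flags that reflect real AI use: {precision:.0%}")   # ~70%
```

under those assumptions roughly a third of all flags land on students who did nothing wrong, and it only gets worse the rarer actual cheating is or the higher the false positive rate climbs. that’s why a score alone shouldn’t count as proof.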
There are a lot of examples of AI detection tools not working correctly. I’ve read plenty of stories where students who actually wrote their reports had the detector say their work was written by AI. A couple of times it was because the teacher hadn’t used the correct settings. One time a student proved to the teacher that the software wasn’t working by copying and pasting the text of the Declaration of Independence into it.
The whole governance issue with automated AI detectors like Turnitin is just scary – especially when the process is so opaque, and sanctions hit before students even get clear info. I get why people are protesting. Honestly, detector scores shouldn’t be the only evidence in academic integrity cases, and right now, so much of it comes down to luck or the mood of the admin.
I’ve always cross-checked my stuff with different tools, but it’s wild how much variance you see: gptzero, Copyleaks, Turnitin – then there’s Quillbot and sometimes AIDetectPlus thrown in too. Some flag nearly everything, others barely blink. Makes me feel like we need some independent appeals panel, plus clearer standards for what kind of evidence is *really* enough. You ever see anyone successfully fight an AI flag purely with their drafts or revision history?
If this kind of automated judgment starts creeping into hiring or credit, I just hope there’s more transparency and real oversight. Otherwise, lots of people could get flagged for things that aren’t even on them. What happens if these tools start making decisions about your resume, your loan, or your application and nobody can challenge it?
Also I read the Guardian piece – crazy how even experts can’t fully trust these systems. The whole thing feels like it’s moving way faster than the solutions can keep up.
Curious which detector people actually *trust* the most (if any) and what appeals process actually protects the accused in your experience.
This is the wrong way to fight the use of AI in education. It’s universities being too stuck in their old ways.
The correct way is to use AI only to check things like the accuracy of quotes and the factuality of assumptions, and as a first-pass review (but not the final decision) for plagiarism, since that needs human confirmation.
Grading should include an in-person review where the student presents and/or defends the results of their work.
Vibe coded software determining the fate of people? What could go wrong?
What happens in universities usually makes its way into the wider world 5 or 10 years later, so buckle up, everyone!
Turnitin sucks ass. It has since the early 2000s. Garbage software, garbage company. We were required to use it, and by using it you gave them a perpetual license to anything you submitted.
AI writing detectors cannot reach a reliability level where the results can reasonably be applied to academic integrity evaluations.
Universities have several other challenges on their hands as well. The first is that a student who doesn’t know how to leverage AI in their field of choice is already underprepared for the job market, so asking students to never use AI doesn’t solve the problem. And deciding whether or not to use AI in the evaluation process itself creates further challenges.
On top of those, there is a problem manifesting at a higher degree (pun intended): students usually take out loans to afford higher education. Banks provide those loans because, so far, people with a diploma could reliably be expected to earn enough to pay them back. If there’s a shake-up in the job market caused by AI, whatever form it takes, even if it’s temporary or unreasonable, that could lead banks to see too much risk in student loans. Students themselves might not want to assume the risk of a loan if that higher pay doesn’t materialize after graduation, and I don’t see higher education institutions drastically lowering their fees to become more attractive, especially because costs, which are mostly staff and faculty, can hardly come down unless they pay those people even less, which is unlikely to happen.
Whatever happens, banning AI use will not solve any problems, even if encouraging proper use doesn’t fully work either.
Now imagine yourself as a student: getting into debt after a hard-won loan approval from the bank, looking at a future job market in tatters, not learning everything you need to join that horrible market, which now requires AI-related skills, and possibly not even graduating because your work, which did not use AI, is incorrectly flagged by an unreliable system.
In 2003, I went back to college to get a second degree. Online as I worked full time.
Even then they put papers through a plagiarism-detection system, so this is not a new use case, just different technology.
It was often wrong, and only once did I have a “professor” question me about a paper.
I looked at it. “It stripped out the quotation marks. Open the original submission.”
Sure enough, the quotes were there.
Instructors should not be allowed to make a decision on their own. It should go to a review panel that independently reviews the case with the student and *an advocate* to determine what was or wasn’t done.
Alternatively, use word processors that track changes, like Word. Cut and paste looks different from typing and editing. A little extra coding and it could conclusively show whether text was cut and pasted, or whether typing patterns changed.
For example, if I track typing speed, accuracy, backspaces, word deletions, inserts, spelling corrections and autocorrections, it would be clear that if a paragraph suddenly had no errors, I probably did not type it.
Require a word processor that has such auditing tools. Hell… sounds like a million-dollar idea… write a word processor with built-in key-by-key, mouse, and cut-and-paste tracking and playback (rough sketch below).
Could it be defeated? Maybe, but probably not worth the effort.
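Roughly the shape I mean, as a minimal Python sketch; the event types, thresholds, and the “looks pasted” rule are just my assumptions, not how Word or any real product actually works:

```python
# Minimal sketch of the "auditing word processor" idea: log per-paragraph
# editing events and flag paragraphs whose history looks like one big paste
# with none of the normal typing and correction activity. The event names,
# thresholds, and flagging rule are assumptions for illustration only.

from dataclasses import dataclass, field

@dataclass
class EditEvent:
    kind: str        # "type", "backspace", "delete_word", "autocorrect", "paste"
    chars: int       # characters added or removed by the event
    timestamp: float # seconds since the document was opened

@dataclass
class ParagraphAudit:
    events: list = field(default_factory=list)

    def record(self, kind, chars, timestamp):
        self.events.append(EditEvent(kind, chars, timestamp))

    def looks_pasted(self, paste_share_threshold=0.8, min_corrections=1):
        typed = sum(e.chars for e in self.events if e.kind == "type")
        pasted = sum(e.chars for e in self.events if e.kind == "paste")
        corrections = sum(1 for e in self.events
                          if e.kind in ("backspace", "delete_word", "autocorrect"))
        total = typed + pasted
        if total == 0:
            return False
        # Flag for human review only: mostly-pasted text with almost no
        # corrections is unusual, but it is not proof of anything by itself.
        return (pasted / total) >= paste_share_threshold and corrections < min_corrections

# A paragraph typed and revised normally vs. one dropped in with a single paste.
typed_para = ParagraphAudit()
typed_para.record("type", 420, 30.0)
typed_para.record("backspace", 12, 95.0)
typed_para.record("autocorrect", 3, 120.0)

pasted_para = ParagraphAudit()
pasted_para.record("paste", 600, 10.0)

print(typed_para.looks_pasted())   # False
print(pasted_para.looks_pasted())  # True
```

The point is the audit trail only flags things for a human to look at; the keystroke playback and the drafts would be the actual evidence in a review.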
Fuck AI, it’s not ready for literally ANYTHING beyond being a glorified chatbot. There’s zero real use for it otherwise.
AI models fine-tuned on their owner’s own human writing style will completely invalidate this.