Last semester, I assigned students in my energy storage systems class a problem set comparing the electrical designs of supercapacitors, lithium-ion batteries and flywheel systems.
One submission stopped me cold. The formatting was polished. The structure was logical. And the technical content was confidently, elaborately wrong. Clearly, ChatGPT had hallucinated charge-discharge characteristics and invented efficiency ratings that appear in no published literature.
The student was not trying to cheat. He told me, quite openly, that he had used AI to help him understand the material. He believed the output was correct. He had no framework for questioning it.
I have thought about that moment constantly since then, because it captures the central failure of how STEM higher education is responding to generative AI. We are focused on the wrong problem: We are asking how to stop students from using AI when we should be asking why they cannot tell when it is lying to them.
What Is Actually Happening in STEM Classrooms
Here is what I observe at the University of North Dakota: Students are using generative AI constantly, across every course, regardless of what the syllabus says. They use it to draft lab reports, check homework solutions, generate study guides and explain concepts they did not grasp in lecture. Some are transparent about it. Many are not. Almost none know how to evaluate what the AI gives them.
The scale of AI usage is not unique to my campus. A recent large-scale survey of students across the California State University system found that more than half use AI tools regularly. An April Lumina Foundation/Gallup poll similarly found that 57 percent of U.S. college students use AI in their coursework at least weekly, with students in business, technology and engineering programs reporting the most frequent use.
This is not an academic integrity crisis. It is a literacy crisis. In my experience, the students using AI most recklessly are not the lazy ones. They are often the most diligent: students who genuinely want to learn, who turn to AI the same way previous generations turned to YouTube tutorials or study groups. The difference is that YouTube tutorials on Kirchhoff’s laws were made by humans who (one would hope) understand Kirchhoff’s laws. ChatGPT does not understand anything—it produces statistically plausible text. In STEM fields, statistically plausible and physically accurate are frequently not the same thing, and our students do not yet know how to tell the difference.
Why Banning AI Is the Wrong Response
Across STEM programs nationwide, the dominant institutional response to AI has been restriction. Many syllabi now include blanket prohibitions on AI use, and institutions have hastily updated their academic integrity policies.
I understand the impulse, and I share the underlying concern.
In STEM fields, errors can be catastrophic: a miscalculated load on a bridge, a wrongly specified battery management system, a misinterpreted drug interaction. These are not abstract risks. Faculty are right to worry that students who outsource their thinking to AI will graduate without the deep understanding that keeps people safe.
But the ban strategy has three fatal problems.
First, it is unenforceable. AI-detection tools are unreliable and will only become more so as the technology improves. Even OpenAI shut down its own AI detector, citing poor performance, and a 2023 study by Stanford University researchers found that detectors misclassified 61 percent of essays by nonnative English speakers as AI-generated, a serious equity concern given that international students make up a significant share of STEM graduate enrollment.
Second, it is dishonest about the profession we are preparing students for. CAD platforms like PTC Creo and SolidWorks are integrating AI-driven generative design and intelligent assistants, the Massachusetts Institute of Technology is developing AI agents that operate CAD software the way a human engineer would, and energy-modeling tools are embedding machine learning at every level. My colleagues in geothermal energy research are already using physics-informed neural networks to model subsurface heat transfer. Biotech firms use AI to predict protein structures. The workplaces our graduates will enter do not ban AI. They expect competent use of it.
Third, and this is the one that keeps me up at night: The ban approach abandons students to exactly the danger we fear. When we prohibit AI without teaching students how to evaluate it, we guarantee that their first real encounter with AI-generated technical content will happen unsupervised, in a professional setting, with real consequences. We are not protecting them. We are deferring the risk to a context where the stakes are higher.
We Haven’t Figured This Out Yet, but Here Is Where to Start
I want to be candid about something that most opinion pieces on AI in education gloss over: We do not yet have a proven pedagogical model for integrating AI into STEM coursework. I certainly do not have one. What I have is a growing collection of classroom observations, a set of instincts shaped by teaching students who are already neck-deep in this technology and an increasing conviction that the status quo is failing them.
That said, we do not need a grand framework to start doing better. Here are five low-cost, feasible steps that any STEM instructor could try tomorrow.
- Dedicate one class session to breaking down an AI output. Pick a core concept from your course, prompt ChatGPT with it live in front of students and walk through the output together. Where is the AI right? Where is it subtly wrong? What did it assume without saying so? This takes no new technology and no extra grading. It takes 50 minutes and a projector.
- Add one line to your AI policy: “If you use AI, submit the prompt and the output alongside your work.” This shifts the dynamic entirely. Students who use AI carelessly will expose it themselves. Students who use AI thoughtfully will demonstrate critical engagement. You learn what your students are actually doing, which is information you currently do not have.
- Build “find the error” into your coursework. Generate an AI response to a technical problem from your course, one with plausible but incorrect reasoning, and ask students to identify and explain the errors. Use it as a homework exercise or reframe one question on your next exam. No new material needed, just a different framing.
- Run a five-minute anonymous AI survey in your class. Ask three questions: Are you using AI in this course? What for? What confuses you about it? You will learn more from those answers than any detection software will tell you all semester. Most students are not trying to hide anything. They simply have no guidance.
- Make the prompt the assignment. Instead of asking students to solve a problem, ask them to write the best possible prompt that would get an AI to solve it correctly, then evaluate the output. This reveals understanding in a way traditional assignments cannot: A student who does not grasp the underlying physics cannot write a prompt that accounts for boundary conditions, unit conversions or material constraints. The deliverable is threefold: the prompt, the AI output and a brief assessment of what the AI missed.
None of these steps require institutional buy-in, budget approval or a published pedagogical study. They require a willingness to engage with the reality of our classrooms instead of the fiction printed on our syllabi.
A Call to My Fellow STEM Faculty
We are the right people to solve this problem. STEM disciplines already teach the exact skills that responsible AI engagement requires: model validation, uncertainty quantification, sensitivity analysis, distinguishing correlation from causation. We do not need to import a new pedagogy. We need to recognize that AI literacy is a natural extension of the scientific and engineering rigor we already value.
That student with the beautifully wrong energy-storage submission did not need a plagiarism charge. He needed what every student in a rapidly changing technical field needs: an instructor willing to say, “Here is how you know this is wrong, and here is why that matters.”
