Everyone at Tufts seems to have a metaphor for AI.
One professor compared humans who religiously use ChatGPT to “barnacles eating their own brains.” Another described AI as a “billion-dimensional glider,” an aircraft that transports people through new realms of discovery and innovation. Yet another envisioned it as a “fairy,” ready at a moment’s notice to wave its wand and solve students’ problems.
What’s with all the metaphors? AI represents infinite possibilities: optimism and pessimism, progress and backsliding, a supplement to human learning and the erosion of human intelligence. From ChatGPT to Claude to Gemini, AI technologies are constantly evolving. Perhaps we find it easier to think about AI in abstract, representative terms because we can’t quite put our finger on what it is yet. Now more than ever, we are grappling with a technology that continues to divide, perplex and excite us.
Tufts is no exception. As a university that prides itself on intellectual curiosity, inquiry and creativity — on being the epitome of a tried-and-true liberal arts education — is there a place for AI in the classroom here?
James Intriligator, a professor of mechanical engineering, certainly seems to think so. As he shared his screen on Zoom to show off new ChatGPT prompts he was experimenting with, I could practically feel his excitement emanating from the screen.
“I think of [AI] like flying an airplane. It’s a whole new human experience … it opens up minds. It creates new avenues of exploration. It creates possibilities and intersections that people miss,” he said. “Realistically, pretty much every job is going to be impacted by [AI] and I think the university owes it to the students to at least get them conversant in it.”
In his free time, Intriligator uses ChatGPT to create new languages, brainstorm policymaking strategies for politicians and even troubleshoot the broken cigarette lighter in his car. He frequently incorporates AI into his homework assignments. In one of his classes, students must use ChatGPT to brainstorm design improvements for various consumer products, documenting AI usage in an appendix.
Listening to Intriligator speak, the skeptic in me couldn’t help but cry out: What about human creativity? Doesn’t asking AI to brainstorm these ideas mean that students aren’t exercising their own potential for original thought?
“I guess I think of it as a new form of thinking and brainstorming and creativity,” Intriligator said. “You still have to be creative and innovative and add the human touch to it, but you can do it at a larger level and faster. … It will take care of the number-crunching, detaily things that you don’t like doing.”
Like Intriligator, James Murphy, a mathematics professor, encourages students to use AI for certain assignments, especially homework that involves coding. Typically, Murphy explained, about half the students in his “Probability” and “Statistics” courses have no prior coding experience — a skill that can take considerable time and practice to master. Allowing students to use ChatGPT helps break down that learning curve.
“Is there a risk that you’re losing a valuable learning experience by asking the machine? Potentially, yes,” Murphy said. “But I think that, at least with [coding], the trade-off is net favorable, because I had a lot of students who just never were able to even get to [a basic level of coding] in the course of my one-semester class, because they had never coded before.”
Yet as valuable as ChatGPT can be, Murphy is not that optimistic about students’ and professors’ ability to use AI tools with discipline. The temptation to turn to AI can be nearly impossible to resist, Murphy said, meaning more and more of his math students may use it as a substitute for putting pen to paper and developing core mathematical skills on their own.
“I don’t think the answer is to say, categorically, ‘You can’t use it,’” Murphy said. “Part of what you have to do in math is struggle. You have to work on hard problems for at least some number of hours. … I worry that if you can just go straight to [AI] and ask it for the answer, then you lose that experience.”
One of the challenges of incorporating AI into education is that each academic department at Tufts is impacted differently. AI technologies may naturally supplement the technical, data-based demands of STEM fields, but in the humanities, they are often seen as cataclysmic threats to the creative writing and critical thinking that those disciplines are rooted in.
Jess Keiser, a professor in the English department, described AI as a “cliché machine” that takes students’ inputs and produces wishy-washy writing and ideas. As I stepped into his office in East Hall, I was met by a floor-to-ceiling bookshelf teeming with literature that he and his students dissect in courses such as “The Paranoid Imagination” and “Of Microscopes and Monsters.”
“[AI is] really good at just churning out fine prose that is essentially meaningless and thoughtless,” Keiser said. “In any kind of writing, there’s some of that — there’s throat clearing. But ideally, the writing and thinking one would want to do and see in a literature classroom is going to be more meaningful, thoughtful – not just boilerplate.”
If there’s any use for AI in the classroom, Keiser told me, it’s as a negative example of the writing style students should avoid. At the same time, he acknowledged that the quality of AI-generated writing has improved exponentially in recent years. AI may be a “cliché machine,” but Keiser argues it has become nearly impossible to tell whether a piece of writing was produced by a student or ChatGPT.
The English department is going to have to do a “serious rethinking” about what future assessments will look like, Keiser said. In some of his classes, he has already begun assigning more in-person exams as a way to prevent students from using ChatGPT. But for Keiser, the most effective response to the inevitability of AI might be even more drastic: “going medieval.” He envisions the creation of “writing labs” around Tufts, where computers are replaced with typewriters and students sit for several hours, clacking away, alone with their thoughts.
Jody Azzouni, a philosophy professor, has also returned to in-class exams. If a student wants to write an essay, they must meet with him multiple times to discuss their thesis, giving Azzouni a way to ensure students have wrestled with ideas themselves as opposed to offloading their thinking to a chatbot.
“What I’m trying to do is create a class that’s beneficial to the student and is good for their brain health, too,” Azzouni said. “If there’s somebody who’s intent on gaming that — well, fine. I’m not a policeman. I’m gonna set things up as best I can, so that those who can profit by it in a healthy way will.”
These days, Azzouni kicks off his classes with a presentation titled “Your Brain on ChatGPT,” riffing on the ’80s “This is Your Brain on Drugs” commercial by Partnership for a Drug-Free America. I reviewed the presentation before our interview, startled by slides presenting evidence that ChatGPT use reduces neural connectivity and weakens our ability to remember passages we have just read.
And yet, “This is Your Brain on Drugs” has become a meme in modern times, widely parodied and even said to have nudged more teens toward trying drugs. I couldn’t help but wonder if anti-AI campaigns will meet a similar fate.
In the humanities, where many professors have adopted firm stances against AI, Ester Rincon Calero stands apart. A professor in the romance studies department, she has embraced a hybrid approach: Students may use AI, but they must demonstrate they are still practicing writing, reflection and critical thinking.
“I think a minority of people are doing what I’m doing, but I love technology. I have always used technology,” Rincon Calero said.
Rincon Calero sees plenty of benefits to incorporating AI into her classes. For students who are too shy to attend office hours or speak to her after class, AI can answer their questions instead, while offering “unlimited” opportunities for feedback as they write essays or study for exams.
“Like everything, a lot of resistance [to AI] comes from the fact that there is a learning curve,” Rincon Calero said. “The faculty development is crucial. … For some people who are not familiar with technology, [AI] is daunting.”
In one of her Spanish poetry classes, students have the option to use the AI songwriting app “Suno” to create a song based on a Spanish poem and then write an essay reflecting on whether the result is representative of the poet’s style. She also encourages her students to use an AI platform called “Rumi” for feedback on their writing, as long as they incorporate the corrections by hand.
Because of the prevalence of AI, Rincon Calero believes students’ ability to demonstrate their knowledge “on the spot” is more valuable than ever before.
“If you have used AI to prepare, I don’t mind, but you’re going to have to talk about it, which means your brain is going to have to process that information and discuss it in class,” she said. “I have increased dramatically [the percentage of] the final grade [composed of] participation in class.”
AI can be incredibly enticing, its allure almost magnetic. I often feel that pull myself. Even as I write this article, I am tempted to open ChatGPT, knowing a little AI assistance would give me more time to work on job applications, edit my essay due tomorrow or hang out with my friends as the end of senior year creeps closer.
Among my fellow Gen Zers, the refrain “Just ask Chat” is increasingly common as AI becomes more ingrained into every facet of our lives. Peer out across the sea of laptops in a lecture hall, and you’ll spot at least a handful of screens open to ChatGPT, with students often asking it to answer questions raised during lecture. Like professors, students remain divided about whether to embrace AI with open arms, steer clear of the technology altogether or try to find a balance between these two extremes.
Junior Cecile Thomas, an English and psychology major, remains firmly anti-AI. As a writing fellow, she is particularly concerned that ChatGPT may homogenize students’ writing styles.
“ChatGPT standardizes your voice and can inhibit some of that personality [from coming] through in your writing,” she said. “I feel like a lot of people I talk to are like, ‘I might as well embrace it, because it’s the reality.’ I personally don’t subscribe to that, because it’s your reality if you want it to be. I’m not choosing to make it my reality.”
Other students, especially those in STEM fields, see ChatGPT as a useful resource that can help them better understand class material while also easing their workload.
“[AI] is so convenient. You can upload your whole file into it. It can detect your handwriting. It’s so freaking smart that it’s honestly kind of scary,” junior Alexa Santa Cruz said.
For Raydris Espacia, a junior and chemistry major, a typical semester involves multiple six-credit classes, about 60 pages of reading per night and extensive work for her labs and recitations. Like many students, it sometimes just gets to be too much — which is where ChatGPT can step in.
“With all that combined … you don’t have the brain power to constantly be keeping up with assignments and lab reports,” she said.
It’s true that Tufts students tend to be the workaholic type, piling on as many classes and extracurriculars as their schedules can bear. But the workload is not always self-inflicted, Espacia explained. Often, the way professors structure their classes can increase the feeling of confusion that leads students to use ChatGPT as a crutch. For example, Espacia pointed to “flipped classroom” courses, in which students watch online lectures as homework and then use class time for practice problems.
“A lot of people that I know struggled with not having the professor teach us in person [and] not being able to ask questions in person,” she said. “I feel like sometimes I’m not getting taught properly, and then it’s up to me to self-study. … There’s a lot of work that is expected outside of class that I don’t think professors are realizing is more work than they think.”
When I asked my interviewees what Tufts students use AI for the most, I was met with a range of responses. Students feed their lecture notes into ChatGPT to generate practice exams. They ask it to synthesize dense readings. ChatGPT solves students’ homework problems, clarifies ideas that the professor did not fully explain during lecture and even completes take-home quizzes.
AI is also increasingly becoming a substitute for office hours. Senior Will Soylamez, a teaching assistant for a computer science course, said that he and his fellow TAs are seeing more AI-generated homework submissions and fewer students showing up to office hours for support.
“I think in a weird way, [ChatGPT] doesn’t feel like cheating, necessarily. We all know we’re not supposed to use it, but … somehow with ChatGPT … it feels like a softer line,” Soylamez said. “I think that’s honestly part of the reason so many people use it.”
No matter their major or their stance on AI, however, the students I spoke with emphasized that it must be used in moderation.
“I personally don’t mind students using it, especially with facilitating their learning, but ask questions in a smart way, minimize your impact and try your best to get [answers] for yourself,” Espacia said.
Santa Cruz, who is studying mechanical engineering, said many of her professors encourage AI usage both inside and outside the classroom. But that encouragement can sometimes have a reverse-psychology effect: The more professors allow the use of AI, the more students want to prove they can complete assignments on their own.
“A lot of our professors, especially my professors, are like, ‘Chat[GPT] is really great,’ but at the end of the day … you need to understand these basic fundamentals,” Santa Cruz said. “What’s the point of going to school if you can’t formulate your own thoughts?”
AI policies across Tufts’ academic departments exist as a patchwork of complex, often contrasting approaches. Some professors, like Intriligator and Rincon Calero, are embracing AI and directly incorporating it into assignments. Others, like Azzouni and Keiser, are going to great lengths to wipe any traces of AI from their courses. As a result, walking between two classrooms on Tufts’ campus can mean encountering entirely different policies on AI usage and what a professor counts as plagiarism.
How should Tufts students navigate this kaleidoscopic landscape of AI rules? That concern was the impetus behind the creation of a new AI Taskforce, composed of approximately 30 faculty members from Tufts’ four schools. The Taskforce meets once a month to discuss the role of AI at Tufts and develop shared guidelines regarding AI use, which will eventually be distributed to professors across the university.
“The real gist of [the AI Taskforce] was to try to bring people together to understand the ways that AI is impacting our work, but also to build capacity, to think about how [we can] work together,” Carie Cardamone, a member of the taskforce and an associate director at the Center for the Enhancement of Learning and Teaching, explained.
For now, the AI Taskforce’s guidelines will not be binding; instead, they’ll function as a set of suggestions that different departments can mold to their liking. Cardamone hopes departments will ultimately create shared “buckets” of policies — sets of policies a professor might adopt in a given department.
“There’s a shared understanding from all of our conversations around the fact that we want guidelines and not policies, and we want them to be broad and allow for academic freedom within space, but give us a scaffolding of common language and common considerations to understand when we’re making those choices,” Cardamone said.
One guideline under consideration would require professors to set crystal-clear expectations in their syllabi about how students may use AI. Rather than just prohibiting students from using AI, professors will be expected to explain why. For students like Santa Cruz, it’s useful when professors are specific about the types of assignments where they permit AI usage versus assignments where it is prohibited.
“I personally appreciate having the ‘okay’ and ‘not okay’ kind of structure,” she said.
There is a fine line between preserving professors’ autonomy over their classrooms and creating academic policies that remain consistent across the university. It’s a balance Tufts will have to weigh carefully as more departments chart their own paths forward.
AI’s radical transformation of our society is undeniable. It is everywhere: in our Google searches, in our email auto-complete suggestions, in the deep-fake videos saturating our X feeds. We may not yet know in which direction it is pulling us, but in the weeks, months and years to come, no one on Tufts’ campus will be able to avoid difficult conversations about how much we want to let this new technology into our lives.
Many professors view incorporating AI into the classroom as a responsibility they owe to their students, who are entering a precarious job market in which many employers will demand AI fluency. The Tufts Career Center agrees, encouraging students entering the workforce to “view AI as a tool, rather than an adversary.”
For Rincon Calero, exposing her students to AI is meant to prepare them for life after graduation.
“The reality [students are] going to face when they leave Tufts and they go to a job is that they’re going to have to use AI to be more efficient. If they cannot be more efficient, then they’re going to struggle,” she said.
Is that enough of a reason for even the most anti-AI student or anti-AI professor to consider giving in?
Where does Tufts — and higher education overall — go from here? Will AI lead to the cataclysmic, earth-shattering educational shift that many theorists predict, or is it perhaps all a bit overhyped?
Will an AI-driven society make us collectively realize the beauty of being human, of the creativity and hard work and thought processes that an algorithm might be able to simulate but that only we are able to feel, viscerally, down to our bones?
I left with more questions than answers.
