Faculty and staff convene to discuss the risks and community impact of artificial intelligence (Sarah Peters | Sarah Peters Photography)
According to a 2024 survey, roughly 85% of Tulane University students use artificial intelligence tools regularly for school. The increasing prevalence of AI has altered the academic landscape and changed how students learn.
Students at Tulane recognize this shifting learning environment, and many have incorporated AI into their study habits. Gavin Mack, a first-year student at Tulane, said his attitude toward AI has shifted.
“I always thought AI was a little sketchy,” Mack said. “But recently, it’s been way better, and I’ve been using it mostly for creating practice questions for me, preparing for my midterms. And it’s worked pretty well.”
Tulane student Grace Dunning said that while she does not use AI often and prefers to stick to her own methods, when she does turn to ChatGPT it is mostly for academic purposes: help with homework, studying or making planners.
Despite incorporating AI into his own studies, Mack said he has concerns about the misuse of AI in the classroom.
“I feel like when it comes to trying to understand the material, I don’t care, as long as you figure out what you’re doing. I think it’s perfectly fine,” Mack said. “My issue comes with doing classwork with it because it’s low effort. You don’t learn anything from it, and it kind of undermines the work everyone else puts into it.”
The prevalence of student AI use has led many professors to adapt their curricula.
Julia Lang, professor and director of the Phyllis M. Taylor Center for Social Innovation and Design Thinking, said that student use of AI has altered how she creates assignments and coursework.
“I no longer assign things where I say, ‘Read this article and write a discussion post on it,’ because while I can hope students will do that, that’s such an easy thing to outsource to AI,” Lang said.
Mark Shealy, professor of English, has similarly noticed how AI is diminishing the role of research and critical thinking in students’ learning process.
“I told my students today, ‘a third of your answers are all the same, maybe you’re using different models … [and] you’re going back and tweaking it here and there, but [you’re] doing the same standard thinking,’” Shealy said. If he asked students to “stand in front of the class and tell us what you wrote and explain, I don’t think they could do it.”
Faculty are increasingly aware of the negative effects that AI has on students’ learning, and they are wary of assigning work that can easily be completed with AI.
Both Lang and Shealy agreed that AI will be a permanent part of students’ lives after graduation. Faculty and programs are collaborating to help students use AI safely and in a way that supports research, originality and society.
Tulane has already pursued formal policy shifts and created multiple faculty committees to explore AI in the classroom. The university now requires faculty to include an AI statement in their syllabi.
“This is actually the first semester faculty have been required to have an AI statement on their syllabus,” Lang said. “Before, it was kind of a ‘Don’t Ask, Don’t Tell’ policy, which isn’t sustainable.”
Promoting a more transparent and adaptive environment surrounding AI is critical for its ethical use among students, according to the AI committee report.
“I think having students be the human in the loop … [who] use AI to do some of their research, but then constantly refine and provide more feedback to the AI to get better responses. So that kind of critical analysis I think is really important,” Lang said.
Promoting data literacy will also encourage the ethical use of AI, according to Lang.
The Connolly Alexander Institute for Data Science offers courses on AI tools and the evolution of AI. The Center for Community-Engaged Artificial Intelligence is a multidisciplinary team of scientists, students and community members dedicated to fostering human-centered AI use that will benefit society.
There are several challenges that will come with helping students become more AI literate and more responsible, including time, environmental concerns and fear of using AI in the first place.
Dunning agreed that the university should promote AI literacy, but said it should also consider AI’s environmental impact.
“Rather than outright banning it, [we should be] educating people on how to use it appropriately,” Dunning said. “But I also think there’s a really big question mark around the environmental impacts of it, especially with the energy center being built in Louisiana.”
