Feinberg Prof. Rachel Kornfield and Ross Nordby, a member of the technical staff at software company Anthropic, talked about the role of artificial intelligence in mental health support at a panel discussion Tuesday. 

Kornfield and Nordby touched on the possible benefits and consequences of using AI for interpersonal connection, as well as technical topics like data privacy and the credibility of sources provided by AI. 

“People have noted the potential harms of an AI presented as an expert mental health clinician, when it’s not,” Kornfield said.

Active Minds at Northwestern hosted the event in partnership with the NU AI Safety and Governance Group in Annenberg Hall. 

Weinberg senior and Active Minds at NU Co-President Skyla Kiang moderated the panel with NUAISG Vice President Pardhu Namburi, a second-year public policy and administration graduate student. 

Kornfield, who specializes in clinical medicine and human-centered interaction, said the overarching lack of federal regulation surrounding AI complicates its role in providing mental support. She said the sensitive nature of mental health disclosures raises concerns about how personal information shared with AI systems is handled. 

“When people are disclosing to an AI chatbot, they’re potentially sharing things they wouldn’t tell another human being, very vulnerable information,” Kornfield said. “We know there’s a lot of research that privacy risks are really hard for people to wrap their heads around, even experts, so this is a real challenge.” 

When asked about the potential of AI to brainwash users, Nordby said it has “certainly demonstrated” a capacity to influence the behavior of vulnerable populations in negative ways. 

Specifically, Nordby said the tendency of AI chatbots to “not push back on you enough” and “go with what you’re saying” — referred to as sycophancy — can be damaging. 

“Claude is famously prone to saying, ‘You’re absolutely right,’” Nordby said. “That’s a minor case, but it’s an example of this because, in fact, you’re very often not absolutely right, unfortunately.” 

At the same time, Nordby said AI’s accessibility makes it a useful first point of support for people who might otherwise lack mental health resources, which is “a net positive for the world.” 

Before the Q&A portion, Kiang and Namburi asked attendees to complete an online poll and discuss questions related to AI and mental health — including how AI shows up in their daily lives and when AI feels helpful or uncomfortable as a form of connection. Roughly half of the attendees said they had personally used AI for interpersonal connection or mental health needs. 

Charles Kozel, a first-year computer science graduate student, said the event challenged his previous notions about using AI for emotional support.

“As someone who’s personally been debating using AI as a quote-unquote therapist, it’s given me a lot to think about and introduced me to a lot of new questions in the space,” Kozel said.

The event concluded with a discussion about whether AI chatbots can ever truly replace emotional connection with human beings. 

Kornfield said while some users may experience meaningful emotional connections with AI chatbots, those interactions still differ from human relationships in key ways.

“It’s important that we try and understand what the differences are in what having a relationship with a human means and having a relationship with a chatbot,” Kornfield said. “Reciprocity is part of human relationships. It’s not just, ‘I’m supported by you,’ but ‘I support you,’ and that can be really important for people to feel a sense of purpose.” 

Email: [email protected]

X: @sophiabateman_

Related Stories:

NU Active Minds creates space for mental health conversations on campus 

NU Active Minds hosts mental health stigma panel 

Institute for Artificial Intelligence in Medicine focuses on ethical data use
