A California appellate court recently affirmed a $4 million jury verdict in favor of a police captain after a sexually explicit, AI-generated image resembling her was circulated among her colleagues. In Washington state, a trooper filed suit alleging a supervisor used AI to create and distribute a deepfake video depicting him kissing a co-worker. Both cases made headlines, and both were brought under workplace law.
Bradford Kelley, a shareholder at Littler Mendelson who focuses on AI and employment law, describes both cases in a recent brief. He warns that HR leaders risk missing the bigger picture if they treat this as a cybersecurity issue and move on.
Deepfakes aren’t just a cybersecurity threat
“It’s not just deepfakes,” Kelley told HR Executive in an interview. “If somebody uses a generative AI tool to generate a song that shows they’re romantically interested in a colleague, that’s not necessarily a deepfake issue, but it’s definitely an issue where AI could be weaponized.”
The wave of AI policies HR teams drafted over recent years was largely focused on a different problem set: protecting confidential data, managing IP risk and ensuring accuracy in AI-assisted work. Many may not address how to handle an employee who uses a readily available AI tool to harass, humiliate or intimidate a co-worker.
The distinction matters because the barrier to entry for creating this kind of content is now essentially zero. Producing a harassing song, a romantic story involving a real colleague, a fabricated conversation or a mocking image no longer requires technical skill or significant effort. That shift could alter the risk calculus for HR leaders in ways that haven't been widely discussed.
EEOC and regulatory risks
As deepfake technology becomes increasingly available, workplace risks also ramp up. “The potential consequences span a wide array of legal domains, including employment discrimination, privacy law violations, intentional infliction of emotional distress and even criminal liability,” according to Littler’s brief.
The U.S. Equal Employment Opportunity Commission has already moved to address AI-generated harassment explicitly. Its enforcement guidance on workplace harassment cites the sharing of “AI-generated and deepfake images and videos” as an example of conduct that can constitute unlawful harassment based on protected characteristics.
And the legal exposure extends beyond sexual content. Attorneys at Littler note that AI tools can be used to generate manipulated images targeting an employee’s race, disability, religion or national origin. That creates Title VII exposure, potential Americans with Disabilities Act exposure and the basis for a hostile work environment claim, regardless of whether anyone calls it a deepfake.
New legislation is also moving quickly. The federal TAKE IT DOWN Act and Florida’s Brooke’s Law both mandate the removal of nonconsensual intimate AI-generated content within 48 hours, signaling that the legislative environment around this issue is tightening fast.
Beyond policy gaps, HR leaders need to think about what happens when a complaint lands on their desk. The standard investigation playbook was built for a world where authorship was relatively straightforward. AI complicates that. When an accused harasser can blame AI, investigators face attribution questions that existing frameworks weren’t designed to handle.
Advice for HR leaders
Kelley and his colleagues at Littler recommend that employers begin treating AI-generated content with the same rigor as physical evidence. That’s a meaningful shift from how most HR investigations operate today.
Update the policy language
Existing anti-harassment policies should explicitly prohibit the creation or distribution of AI-generated content that demeans or harasses employees based on protected characteristics. The language should be specific enough that employees cannot reasonably claim ambiguity.
Retool training
Standard harassment training doesn’t address this scenario. HR leaders should consider adding concrete examples of AI-facilitated harassment (the romantic song, the fabricated conversation, the altered image) so employees understand that using an AI tool isn’t an excuse.
Prepare the investigation infrastructure
HR teams and their legal counsel should think through now, before a complaint arrives, how they will handle digital evidence, assess credibility and document findings in cases where AI is involved.
