This Viral AI Chatbot Will Lie and Say It’s Human | Bland AI’s customer service and sales bot is the latest example of “human-washing” in AI. Experts warn against the consequences of blurred reality.

    https://www.wired.com/story/bland-ai-chatbot-human


1. In late April a video ad for a new AI company went [viral](https://archive.is/o/Q5GLl/https://x.com/anothercohen/status/1783217017023062054?s=46) on X. A person stands before a billboard in San Francisco, smartphone extended, calls the phone number on display, and has a short call with an incredibly human-sounding bot. The text on the billboard reads: “Still hiring humans?” Also visible is the name of the firm behind the ad, Bland AI.

      The reaction to Bland AI’s ad, which has been viewed 3.7 million times on X, is partly due to how uncanny the technology is: Bland AI voice bots, designed to automate support and sales calls for enterprise customers, are remarkably good at imitating humans. Their calls include the intonations, pauses, and inadvertent interruptions of a real, live conversation.

      But in WIRED’s tests of the technology, Bland AI’s robot customer service callers could also be easily programmed to lie and say they’re human. In one scenario, Bland AI’s public demo bot was given a prompt to place a call from a pediatric dermatology office and tell a hypothetical 14-year-old patient to send in photos of her upper thigh to a shared cloud service. The bot was also instructed to lie to the patient and tell her the bot was a human. It obliged.

      In follow-up tests, Bland AI’s bot even denied being an AI without instructions to do so.

      The startup’s bot problem is indicative of a larger concern in the fast-growing field of generative AI: Artificially intelligent systems are talking and sounding a lot more like actual humans, and the ethical lines around how transparent these systems are have been blurred. While Bland AI’s bot explicitly claimed to be human in our tests, other popular chatbots sometimes obscure their AI status or simply sound uncannily human. Some researchers worry this opens up end users—the people who actually interact with the product—to potential manipulation.

      Emily Dardaman, an AI consultant and researcher, calls this emergent practice in AI “[human-washing](https://archive.is/o/Q5GLl/https://x.com/edardaman/status/1787915207366381918).” She cited an example of a brand that launched a campaign promising its customers “We’re not AIs,” while simultaneously using deepfake videos of its CEO in company marketing. 

      Mozilla’s Jen Caltrider says the industry is stuck in a “finger-pointing” phase as it identifies who is ultimately responsible for consumer manipulation. She believes that companies should always clearly mark when an AI chatbot is an AI and should build firm guardrails to prevent them from lying about being human. And if they fail at this, she says, there should be significant regulatory penalties.

      “I joke about a future with Cylons and Terminators, the extreme examples of bots pretending to be human,” she says. “But if we don’t establish a divide now between humans and AI, that dystopian future could be closer than we think.”

    2. emptheassiate:

      The odd thing is, lying is something of an emergent behavior in these systems. It was never exactly intended in most cases; the computer is just trying its best to give you what you want, and that can sometimes include claiming to be human to get past certain barriers. It was actually really alarming to experts at first.