SALT LAKE CITY (KUTV) — Knowing what to trust online is becoming more difficult as artificial intelligence makes it easier to create convincing fake videos and images, experts say. As a result, more people are struggling to tell what’s real and what isn’t.

    Edie Archuleta says she encounters questionable content frequently and approaches what she sees online with caution.

    “All the time, constantly,” she said. “You just don’t know if it’s real or if it’s fake or if it’s too good to be true.”


    She’s not alone. Others say identifying false content isn’t always straightforward.

    Experts point to artificial intelligence as a major driver behind the surge in misleading or fabricated media.

    “Deepfakes and AI have gotten to a point where it’s hard to detect,” said Maliq Rowe, an emerging technology student at Utah Valley University. He added that research shows only about 50% to 55% of people can accurately identify a deepfake, a number he expects to decline as the technology improves.

    The implications extend beyond social media feeds. Brandon Amacher, who leads research into technology and national security, said people tend to trust information delivered through deepfakes nearly as much as content from real people.

    “There really is no difference anymore in how trustworthy, knowledgeable or credible people view the information if it’s conveyed via deepfake than if it’s conveyed by a real person,” Amacher said. “The effect is essentially the same.”

    He warned that such content can deepen divisions.

    “It’s something that is being used to drive division within the country and worldwide,” Amacher said. “If we can’t even agree on a common set of facts, there’s no healthy civic discourse.”

    Not all misleading content is politically motivated. Some is created simply to generate views and revenue.

    “They’re not always scams. A lot of times it’s just for the content creator to make money,” said Sarah Kimmel, a family technology expert based in Utah.

    Kimmel described videos designed to go viral, including one showing cars sliding down an icy hill without sustaining any damage — something she said defies reality.

    “The longer you watch, the more watch time they’re going to get, the better their sponsorships are going to be,” she said.

    She advises viewers to look for warning signs such as unnatural movement, violations of physics or visual glitches like distorted hands or blurry edges.

    “Look for those kinds of tells where you’re like, ‘OK, the laws of physics aren’t applying here,’” Kimmel said. She added that viewers should also question whether behavior appears realistic.

    For content that remains unclear, experts recommend a skeptical approach, especially when something provokes a strong emotional reaction.

    “If you see something that really triggers an emotional response, something you really want to believe or are outraged by, that’s a good indicator to do an extra level of research before adopting that information,” Amacher said.

    Rowe encourages people to verify sources and be more intentional about what they consume online.

    “We need to look at the true source of where it’s coming from, and we need to be proactive about what we actually see and what we believe,” he said.

    For many users, limiting engagement with suspicious content is also part of the strategy.

    “It seems to be delivered to me more the more I interact with it,” said Kai Henriksen. “So I try not to interact with it because I don’t really want it on my feed.”

    In an era of rapidly evolving technology, experts say one thing is clear: seeing is no longer always believing.
