Today’s digital ecosystem creates the perfect storm for identity theft. AI makes every step — from stealing personal information held by companies, to pinpointing the right Social Security number to target, to faking a driver’s license — easier and more sophisticated.
Some AI research labs are already acting with an abundance of caution amid fears of cyberattacks. Anthropic PBC is rolling out its new model, Mythos, to a select group of companies so they can test it against their own products and look for vulnerabilities. Mythos can find loopholes in all sorts of operating systems, even exploiting Linux, the open-source operating system that powers most smart TVs, cars and other electronics, according to employees at Anthropic. OpenAI is also shopping its equivalent model around for companies to test.
When one researcher at Anthropic tested Mythos, they found it could pull off the equivalent of a digital bank robbery. The cautionary tales from Anthropic are prompting government officials to send up a flare to the financial sector.
We need personal AI that answers and screens your chats and phone calls. I can see a spam call a million miles away, but my mom can’t.
btoned
Good thing AI is only used by good productive worker bees and not any malevolent actors!
accessoiriste
Isn’t that the entire point of generative AI? What am I missing?
Medical_Tailor4644
The scary part is scammers don’t even need perfect AI anymore, just “convincing enough for 30 seconds.” Most people still trust voices, screenshots, video clips, and official-looking messages way more than they probably should now.
Typical_Depth_8106
The rapid evolution of artificial intelligence has fundamentally altered the landscape of digital deception by removing the traditional friction that once limited the scale and sophistication of fraudulent activities. In the past, high-level scams required a significant investment of human time and a specific set of linguistic or technical skills, which naturally throttled the volume of attacks. Today, generative models have democratized these capabilities, allowing actors to synthesize highly convincing personas and narratives at a speed that human oversight cannot easily match. This shift represents a transition from artisanal, manual labor to an industrial-scale automation of dishonesty.
As these systems process vast amounts of behavioral data, they become adept at mimicking the subtle nuances of human interaction, making it nearly impossible for the average individual to distinguish between a genuine request and a machine-generated fabrication. The barriers to entry have vanished as sophisticated coding and persuasive writing are now available through simple prompts. This allows for the creation of deepfake audio and video that can bypass biometric security measures or manipulate emotional responses in real time. Because the technology can iterate on its own failures, learning from every blocked attempt, the cycle of innovation in fraud now moves at a pace that renders traditional defensive strategies reactive rather than preventive.
The difficulty in stopping these digital incursions stems from the fact that the underlying technology is integrated into the very fabric of modern communication. When the tools used for legitimate creative expression and productivity are the same ones used to engineer a breach, the environment becomes one of perpetual uncertainty. This systemic vulnerability suggests a profound change in the human experience of the digital realm, where the assumption of authenticity is replaced by a necessary and constant vigilance. We are witnessing a moment where the architecture of trust is being reconfigured, as the ease with which a digital identity can be forged forces a total reassessment of how we verify reality in an increasingly virtual world.
*Jennah Haque for Bloomberg News*
[Read the full story here.](https://www.bloomberg.com/graphics/2026-ai-identity-theft-scams/)