“In some markets, we have found that more than 70% of new enrolments may be deepfake attempts,” he notes. “We’ve identified more than 150 types of deepfake attacks.”
Last year, Microsoft warned that AI-generated deepfakes are highly realistic and increasingly simple for anyone to produce. The company flagged that deepfakes are increasingly being used in fraud, and called for new legislation to curb bad actors' use of these technologies.
For example, cybersecurity experts say that North Korean IT workers use deepfaked identities to get jobs at leading tech firms and funnel earnings back to the isolated country.
Gawkhimmyz on
make corporations responsible for what the AI creates…
DragonWhsiperer on
This sort of stuff reads like the rise of remote meetings over the last couple of years since Covid will be coming to a close, or at least be limited to users on trusted systems.
It may also increase the incentive to meet in person, especially for initial meetings, and to generate "verifiably human" codes from those.
And as others said: make the companies that own AI generators responsible for the content they spew out. No more freewheeling with the fabric of society.
barrsm on
I’m sure it will become a factor in sentencing that someone used AI in committing fraud. Meanwhile, replacing workers with AI will remain legal.
WeRegretToInform on
This may be what kills remote working. Even if we do in-person interviews, or require people to show up for a week of onboarding, how do I know they aren't just a front man for a deepfake group?
As a company, WFH increasingly seems like more trouble than it's worth.
Talentagentfriend on
That's the thing with AI: it is a threat to everyone. It isn't AI itself that is going to ruin us, but how humanity deals with it. It will blur the line between reality and falsehood. And it is developing so fast that we won't be able to regulate it; it's already not being regulated. There are no protective guardrails whatsoever. This is only the beginning.
Anastariana on
Do some basic background checks and people won’t be able to deepfake their way into your company.
lowrads on
That’s one way to get back to a firm handshake and a winning attitude.
Realistically though, it’s more likely that companies will want to go through channels, which means that “talent agencies” will be hawking vetted human chattel. I suppose the real question is how do we get the flesh floggers to adopt the same roles as proper talent agents or have an incentive to negotiate for the workers’ interests.
live4failure on
It's only going to get worse. You really think these greedy corporations won't offshore all the jobs to an AI farm in China or India at less than 70% of the cost, on purpose? It doesn't matter if they can fake identities; they will certainly (legal or not) lease thousands of more developed AI bots over time. Imagine one guy owning 10,000 virtual AI slaves/servers that he can put to remote work anywhere in the world: faster, with no fatigue, no emotions to manage, and cheaper labor. Not a single one of these greedy fucks would pass up an opportunity like that, even if it destroys our economy and starves our children.
AffectionateSteak588 on
Remote interviews and jobs are going to die off so fast.
jaeldi on
What are "new enrolments" in the financial sector? Is that newspeak for "employees"?
Shloomth on
Good. Those people have always done nothing to contribute to society and have always made far more money than they should.
ActiveBarStool on
these companies realize there are literally (highly accurate) algorithms to catch AI-generated pictures, right?
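For context on the kind of detection the comment alludes to: production deepfake detectors are typically trained neural classifiers, but one classical image-forensics heuristic that anyone can run is Error Level Analysis (ELA). The sketch below is illustrative, not a production detector; it only assumes the Pillow imaging library. Re-saving a JPEG at a known quality and differencing it against the original can make edited or synthesized regions stand out, because they often recompress differently from the rest of the image.

```python
# Error Level Analysis (ELA): a simple image-forensics heuristic.
# Requires the Pillow library (pip install Pillow).
from io import BytesIO

from PIL import Image, ImageChops


def error_level_analysis(img: Image.Image, quality: int = 90) -> Image.Image:
    """Re-save the image as JPEG and return the per-pixel difference.

    Regions that were edited or synthesized often recompress differently
    from the rest of the image, showing up as brighter areas in the result.
    """
    # Re-encode at a fixed quality into an in-memory buffer.
    buf = BytesIO()
    img.convert("RGB").save(buf, "JPEG", quality=quality)
    buf.seek(0)
    resaved = Image.open(buf)
    # The absolute difference highlights compression-level inconsistencies.
    return ImageChops.difference(img.convert("RGB"), resaved)
```

Usage would be something like `error_level_analysis(Image.open("photo.jpg"))`, then inspecting (or brightness-scaling) the result. ELA is easy to fool and produces false positives on heavily recompressed images, which is part of why modern detectors rely on learned features instead.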
Intrepid-Account743 on
Are we going to see the old paper CV returning because of this AI shit?
Still, it might revive the Post Office.
Every cloud…