SS: Researchers at Cambridge are warning about a new ‘intention economy’ where AI tools predict and influence what you do – like booking a trip or even voting – and sell that info to the highest bidder. Using models like ChatGPT, AI could personalize suggestions based on your data, subtly steering you in specific directions. The study raises big questions about how this could impact elections, competition, and personal freedom. Should we be worried, or is regulation the answer before it gets out of hand?
Plenty_Intention1991 on
They already did duh. Musk beta tested it and it worked.
CrispyDave on
I don’t see how this is any different to the marketing algorithms we’re subject to already, just a different way to achieve the same thing.
dave_hitz on
Soon? Yo! Have you ever visited Netflix or Amazon? TikTok? Youtube? Companies have been using AI tools to manipulate my online decision making for **years**.
LastAvailableUserNah on
Can't manipulate me if I'm too depressed to care. Check and mate.
ZombiesAtKendall on
Yeah, no real surprise there, they will probably have comments specifically targeted at you, for all you know, this is one of those comments. And maybe you say that I am just making this up, but maybe that’s what the algorithm wants me to say to make you think what you think, like nah this isn’t some targeted comment, when really it is, like reverse, inverse, counter intuitive, backwards, reworked psychology. Knowing you better than you know you. What makes you angry, what turns you on, what angrily turns you on. Ring a bell, suddenly you’re manhattan man, turning out tricks to subdue your underdoses, don’t understand what I mean? You soon will. Backstreet Boys reunion tour 2027.
SmashinglyGoodTrout on
That’s the goal yes. Everything is now made to part us with our money and steer our decision making in favour of what the powers that be want.
Garthritis on
Now in other news, the Sun rises in the East and sets in the West.
DaGriff on
I've been thinking lately about all the ways the internet can be broken by AI. Fundamentally I think it is possible for AI to undermine the trust that people have in using the internet. Once people become wise to it, the reality is that every time you interact with the internet, it's trying to manipulate you for someone else's gain. That's the point where trust is undermined. There will always be people who don't care, but there will also be others who will stop using it. The question I have is: who is allowing AI chats to initiate conversation with them? I guess it's not long before they have access to messaging apps and simply just send you a message. Yikes! Once trust is undermined, it's over.
Fred_Oner on
Why do these titles always say "may"? We already know THEY ARE… Do we really need to lie to ourselves? There's a reason why someone died of "lead poisoning" not long ago, due to these biased, for-profit "AI tools" denying claims that shouldn't have been denied to begin with. And for what? To increase their profit, so some useless CEO and shareholders can make more money? I guess I can confidently say that yes, AI tools ARE indeed manipulating people online.
T1Pimp on
Maybe it’ll make it faster or accessible by more businesses but Amazon has been manipulating every user of their site in this way for ages.
SwamiHamster on
My online decision-making may soon manipulate AI tools.
Plenty_Advance7513 on
I think it’s going to come out that states have been using a.i. behind the scenes for decision making scenarios or evaluating people in more ways than one
Phoenix042 on
“soon”
Ah yes, the near future of literally right the fuck now.
the_storm_rider on
What do you mean “soon”? It’s been happening for a decade now.
dillpiccolol on
My manager literally follows anything the tool says.