
I recently published an open-access chapter investigating a question at the heart of our technological future: what happens to human autonomy and agency as we increasingly rely on AI recommendation engines?
The research examines how tools like Google Maps, YouTube recommendations, and search engines don't just help us – they fundamentally transform how we:
- Form intentions and make decisions
- Process information and consider options
- Remember and retrieve information
Drawing on extended cognition theory, I explore how our "data doppelgängers" (the digital profiles platforms create about us) become extensions of ourselves in ways that previous technologies never did.
To quote from the chapter: "First we shape our profiles; thereafter, they shape us." This raises profound questions about the future relationship between humans and AI systems.
As we move toward more sophisticated AI systems, I believe we need to reconsider what "human-centered AI" truly means beyond just respecting rights – we need to consider how these systems change what it means to be human.
Chapter link: https://dx.doi.org/10.1201/9781003320791-5
I'd love to hear this community's thoughts on where this relationship is heading. Is cognitive augmentation through AI a step toward transhumanism, or are we sacrificing essential human qualities?
[Research] As we delegate more thinking to AI, are we becoming more "superhuman" or just more dependent?
by u/Osho1982 in r/Futurology
I’m pretty sure we’d become more dependent. I remember seeing an article about a study showing that people who used AI to figure things out saw their critical thinking skills deteriorate.
The AI can’t even give me the correct answer in a Google search about a well-known cartoon character. We are being shaped by a tsunami of misinformation.
We have become more stupid and dependent because we no longer think about whether something is true or not; we just accept what AI tells us.
People desperately need to learn how to use AI. It ranges from virtually worthless to a solid shortcut, depending on what *and how* you ask it.
Those who just slap words and punctuation into it are definitely dependent. Those who understand how to curate a question about something specific are *less* dependent.
I don’t think AI is making anyone “superhuman”.
Before diving in what’s your take on the hard problem of content?
I like to think it will make us superhumanly dependent.
So far I would see it more as another tool under our belt. It hasn’t made us any more superhuman than, say, the internet or computing has.
I think that’s a very limited way of thinking about it:
Personally, and broadly, I feel like humans as a species have constantly built new tools to do things we weren’t able to do without them. LLMs (I hate the generalization to AI, as AI is a much broader field and it’s tiring to pretend that AI equals LLMs, but I digress) are a tool in the end. A tool with limits and problems, and it’s not good at fixing everything.
One aspect of all of this that I find very interesting is the question “will we evolve based on the tools we created?” By that I mean: as humans, we are really bad at being connected with so many other humans. You saw it in the past with celebrities really going off the rails, and part of that was having so many connections to deal with. Now everyone is dealing with that same thing: we have so many things (so much information, as that article references) bombarding us constantly that we don’t fully have a way to digest it. Being unable to deal with this has a really significant negative impact on us, to the point of self-destruction in one way or another. I’m really curious to see if, a few generations from now, we’ll be able to better deal with the huge amounts of information we receive from everything around us.
I think if we become “super human” it would be due to us evolving to better utilize these tools of mass peer to peer communication. Using tools has always been a defining trait of apes and humans in particular, so to me it feels like a step on the road we’ve been traveling on for a long long time. I don’t think this new tool will fundamentally change us any more than the bigger changes we’ve already been through, and I find it hard to truly compare LLMs with the industrial revolution or the invention of the internet: thinking it is *that* revolutionary is folly imho.
I’d like to note, since this is the internet and all forms of nuance are lost: the question you pose is a good and valid one. I personally just have some others that I find more interesting, especially in terms of the conclusion (if any).
It would be interesting if you could tweak your digital doppelganger into making all of the algorithms subtly guide your internet experience toward something you want but don’t pursue naturally yourself. Perhaps you could have it subtly make you better at math, or language learning, or organization. You don’t have to actively pursue any of it; the algorithm just filters all of your incoming information that way, lightly at first but more and more as time goes by, until the subject is a part of your life. Kind of like a guiding subconscious impulse to do that thing. Over the course of a few months, most of what you get on the internet is about language, math, organization, or whatever. You become “that guy” through exposure to it.
I’d say yes to both. Humanity has been transforming from independent but weaker beings into a more potent but dependent superbeing for a very long time, since at least as far back as the advent of writing and note-taking. Even then it was controversial: Socrates was skeptical of written notes for the same reasons that you are skeptical of AI.
I’d say that AI needs to become more reliable before it can reach its full potential, but some day it’ll feel completely normal to outsource some of the things that we do to AI.
Don’t know about superhuman, but superstupid, unequivocally. Next, rulers are inorganically evolved species.
It’s like GPS: super helpful, until you forget how to read a map.
I would argue that we are becoming more superhuman AND more dependent, but is that not true of all technologies?
I encountered your post during my morning news scan, so I confess I did not read your chapter. However, in response to your question, “Is cognitive augmentation through AI a step toward transhumanism, or are we sacrificing essential human qualities?”, my knee-jerk reaction is that while pocket calculators may have resulted in fewer people who can do basic arithmetic, algebra, and geometry without them, they did not have a detrimental effect on human progress any more than the wheel, the plow, metallurgy, electricity, etc.
I realize this may be an outdated heuristic (calculator impact = AI impact) and may have already been disproven for its flaws, but technologies that reduce the amount of energy required to accomplish a task have not been harmful in the past. They may have harmful side effects, yes, as all technology is a two-edged sword, but they have not been harmful in terms of human progress. I assume you used calculators or computers in your statistics courses?
My view does not mean that I assume the benefits outweigh the dangers, or that caution is not sensible, but only that the development of AI is less an anomaly than it appears.
It will wreck most of us.
Only few will manage to maintain cognitive performance.
And that is going to be the next real schism in society.
Once people started storing their numbers in phones, few remembered numbers anymore. It’s called cognitive offloading. If we look to AI to do our thinking, the same thing will happen.
It depends on how we as a civilization decide to use LLMs.
If we use it to arrive at the answer to a problem, we become dependent. If we use it to learn the method for figuring out the answer ourselves, it becomes a tool.
Within software engineering, it has been especially hotly debated whether AI will kill jobs by replacing the 80% of workers who only do 20% of the work, and what people can do to avoid being replaced. The advice has almost always been the same: use it as a glorified search engine, not a glorified calculator, and use the How in the answer, not the What.
Like many other things, it will be a boon to people who use it properly. It will be a crutch, and even a detriment, to people who outsource their thinking to it.
EDIT: My rule of thumb when asking an LLM a question is this: if I were studying under a mentor and asked them this question, would they give me an answer, or would they tell me to figure it out myself? If the former, I ask the question; if the latter, I change my question to something else.
More superhuman? Or more dependent?
Please, the less I have to think about bullshit I can delegate, the more I can think about stuff I really need. It does not make me better; it makes me able to do narrow stuff better, nothing more.
But at the same time, it surely makes me more dependent. On a technology, not on specific services; they’re largely interchangeable.
Same as each technology does.