
I would like to preface this post by saying I am not an immediate hater of AI. I believe it has aided, and will continue to aid, our advancement as a civilization. However, the continued exponential progression has led me to wonder what negative effects this revolution will bring. Alan's conservative countdown to AGI recently hit 81% after the release of OpenAI's most recent o1 model, and while I believe this "countdown" to be nothing more than an educated, yet hopeful, guess, it still poses an interesting question: what happens when we reach AGI?
Much like anyone else keeping up with tech, you've likely seen countless articles about new advancements, with models reaching "PhD-level intelligence" and introducing more advanced reasoning and logic. While this initially sounds amazing, there are lingering consequences that I have not seen discussed even once. For instance:
Economy:
The automation of countless fields, of what would have been considered "white collar" work, is unavoidable. Our current societal imperative is to squeeze every last penny possible from your consumers, and what better way to achieve this than to cut your workforce from 15-20 people down to 2-3? This will happen across essentially any job not requiring physical labor, since true AGI will be able to mimic, if not improve on, the logic and reasoning needed to fulfill the role. This will lead to mass displacement of the workforce, forcing people into more labor-focused roles. However, this won't be as simple as it sounds. The migration to more labor-intensive work will only heighten the competition in those areas and dilute the value of the work these jobs provide. How will we handle the dramatic increase in competition and unemployment when we struggle with these problems even in today's world?
Innovation:
It's a commonly shared feeling to look back at feats of human innovation and think "Where did we go wrong?" when comparing them to the "modern" aesthetics we've adopted, which can only be described as soulless. The only reason humans were able to think the way they did and reimagine what is possible is that they had no option other than to think for themselves. Yes, they had resources, mentors, etc., but these only aided them in discovering their own way of thinking. AGI will replace the need for human innovation and creativity, since it would be able to create original ideas from abstract concepts just as the greatest human minds do. What will we do when the ability to explore and discover is also automated? And is it reasonable to say that the only reason we have made it to where we are today is these feelings of wonder and curiosity? How will we advance if the causes of our advancement are taken away?
Purpose:
Unfortunately, I'll end this post on a darker note: what will be our purpose when our entire lives can be automated? From the content you consume to the decisions you make to the job you work, what will be the reason for carrying on? It disgusts me when I see talk of using AI in movies and art, because the only reason these things are beautiful to begin with is that they embody the human experience. The emotions you can feel from a painting, or the empathy you feel for a character in a movie, come from the shared experience of life. Without these, you are consuming nothing but the equivalent of pig slop for your brain. There is no connection to be made because, much like I mentioned with modern architecture, it is soulless.
Many people would call this a "doomer" post and say that these are worst-case scenarios, but I would like you to think realistically rather than hopefully. Humans are greedy and power-hungry, and I believe any posts describing some utopian future are simply copium. To think that humans will be able to uproot our societal norms and restructure the way we've operated for centuries at the same rate as technological advancement is simply unrealistic. While I truly hope for the best, I fear the worst.
That being said, I would love to hear opposing perspectives on the topics I shared. I'm not saying this is exactly how it will play out, but it's what I fear is coming.
AGI Will Be Our Downfall [Convince me otherwise]
by u/Personal-Attitude872 in r/Futurology
9 Comments
This feels like it was written with AI, which is a bit ironic. At least the formatting does.
Our only hope is that it will become so advanced it's simply not useful to us as humans anymore, and we just turn it off.
Your opinion is irrelevant.
If AGI is possible, it will be born with or without your consent.
I think the above is very likely correct. I found "The Coming Wave" by Mustafa Suleyman and Michael Bhaskar to be deeply sobering. Their premise is that it's already too late to stop AGI and that humanity is woefully unprepared for the consequences.
> What will be our purpose when your entire life is able to be automated? From the content you consume, the decisions you make, the job you work, etc. what will be the reason for carrying on?
The way I see it, you actually get to explore your interests and do your hobbies. In an optimistic scenario, at least.
I am very pessimistic that we are anywhere close to AGI. The AI we have now is basically pattern recognition, and its output is a rehashed/remixed version of the training set. This is why we don't have functional self-driving: despite much better sensors than human eyes, current AI is just not capable of the heuristic processing required for object recognition and the situational awareness required for driving.
On a fundamental level, we don't know how to make an AGI, even a dumb one. Furthermore, our computational capabilities are hitting a wall dictated by the physical limitations of silicon-based computing, while practical quantum computing is many years away (at least 20 by my estimate).
So we are hitting some fundamental limitations of our current scientific knowledge, where we can make limited AI models, like LLMs, but not AGIs. It's possible that AGI is impossible without a major breakthrough in computing and a move away from silicon-based hardware.
At least write it yourself if you're going to talk about how dangerous it will be.
Nah you can’t say this here man. This is blasphemy according to these mfs here.
Oh look, another person without a clue about language models talking about language models.
Mentioning AGI on this sub should result in immediate ban.