
The rapid pace of development on these AI tools is worrying. Looking at the tools I use as a software engineer, it really looks like our jobs are going to be fully automatable a lot sooner than we think. Why aren't governments doing more to mitigate the potential effects?
https://buildingbetter.tech/p/the-coming-collapse-of-white-collar

Submission Statement:
The article explores how AI is accelerating the breakdown of traditional white-collar work—targeting roles once considered safe from automation. As knowledge jobs get absorbed by increasingly capable models, what happens to middle-class stability, tax revenues, and national productivity? Should governments begin implementing policy safeguards like upskilling programs, UBI pilots, or taxation on AI productivity gains? I’d love to hear how others see this playing out over the next 5–10 years—especially in service-driven economies like the UK.
These articles are gonna be so wild in hindsight in 10 years when almost nothing has changed
Have you ever seen a group of 10 workers looking at 1-2 other workers doing some “work”? (I am consciously being vague here)
Yeah, more like that.
I think there will be far fewer engineering jobs. But those who are well-rounded (dare I say full-stack), can debug and fix complex issues, are strong communicators and architects who can define requirements for clients who have no idea what they want, and, most importantly, can use AI tools effectively will be in demand.
For senior devs who are coasting, it’s going to be tough. And I have no idea how new junior devs will fare when AI can do their work in a fraction of the time.
And why should they? If most taxation falls on income from work, and those incomes are now generated by AI, governments will finally tax companies more and workers less. And honestly, who cares about work itself? The important thing is to have an income.
Acknowledging that the fundamentals of our social systems will end is to acknowledge the end of the social conservatism of both the Dems and Repubs. It is the end of neoliberalism. We are in an extremely conservative political environment where no major ruling party, or even major second party, is willing to acknowledge that this social landscape is ending. Everyone is too paralyzed to acknowledge the paradigm shift in economics, sociology, and psychology that an AI + humanoid world will bring about in the next 25 years.
I don’t think knowledge jobs are being hit directly, unless you consider the mechanical aspects of code writing or forecasting to be knowledge jobs. My theory is that very soon we will see a crisis where fewer job seekers are able to think critically, because we’ve given them the impression that coding/math is THE thing to go for at the expense of the arts. This would lead to a shortage in sectors like academia, government, and most importantly entrepreneurship. So in this sense, knowledge jobs – jobs where you literally sit and think about how to creatively solve a problem – would get hit indirectly because of AI. Unfortunately, a lot of today’s upskilling schemes focus on coping with AI rather than outmaneuvering it.
Part of staying relevant in your field is going to include getting good at using the AI tools and being somewhat imaginative in their application. You don’t want to be John Henry going one on one with the steam drill, but have one of your own to bring to the rails.
I do UI/UX and I have more work than ever before, and I’m also leveraging AI tools heavily to fill in clients’ gaps. I more than 2.5x’d my income in the last month alone.
Because it goes against the foundations of a capitalist system and nobody has a solution
>looking at tools I use as a software engineer
Could you please elaborate on which tools you’re talking about? I’m also a software engineer using AI tools, and I don’t believe they’re even close to fully automating or replacing me. Current-gen tools can make me more efficient in certain situations (boilerplate, autocomplete), but they’re hopeless even at writing a suite of jest test cases using patterns that are in widespread use across the rest of the codebase, meaning I have to refactor.
AI is also far more likely to create edge-case bugs than to spot and warn me about them. These tools act like Stack Overflow copy-and-paste bots, because at their core that’s what they are.
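To make the pattern-following complaint concrete, here is a rough sketch of the kind of table-driven test convention a codebase might standardize on. Everything here is made up for illustration (plain TypeScript with a hypothetical `slugify` helper rather than an actual jest suite), not taken from the thread:

```typescript
// A small utility of the kind a codebase might want tested.
function slugify(s: string): string {
  return s
    .trim()
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, "-") // collapse runs of non-alphanumerics
    .replace(/^-|-$/g, "");      // strip leading/trailing dashes
}

// Table-driven cases: one row per scenario, edge cases included.
// The convention is the point: generic AI-generated tests tend to emit
// one-off ad-hoc assertions instead of extending a table like this.
const cases: Array<{ input: string; expected: string }> = [
  { input: "Hello World", expected: "hello-world" },
  { input: "  spaced  out  ", expected: "spaced-out" },
  { input: "already-slugged", expected: "already-slugged" },
  { input: "", expected: "" }, // edge case: empty input
];

for (const { input, expected } of cases) {
  const actual = slugify(input);
  if (actual !== expected) {
    throw new Error(`slugify(${JSON.stringify(input)}): got ${actual}`);
  }
}
console.log("all cases pass");
```

Following the house convention matters because the next reader (or the next AI suggestion) extends the table instead of writing a new test from scratch.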
>OpenAI is training successive generations of AI agents that start as useful assistants and end up automating nearly all white-collar labour, including the R&D for future AIs.
Unpopular opinion: While AI-driven job displacement is definitely a major concern, the idea that they’re going to take over for scientists and engineers right now is silly. R&D?? There’s a massive difference between summarizing and restating existing texts and doing original research.
One thing we have to keep in mind is that companies like OpenAI are still in the “runway” phase. They’re burning investor cash and seeking more of it, both in massive quantities, with the expectation that they’ll figure out how to become profitable at some point. This means that they have to promise the moon. They have to make everyone believe that they can do these amazing things that will make the investor bucks roll in. But LLMs are what they are. At the end of the day, LLMs can only put together building blocks that we give them. They’re a very sophisticated tool, but there’s no pathway between LLM and genuinely intelligent system. It’s not like if you dump enough money into an LLM, you get general AI out the other end.
There’s a good argument to be made that AI as an industry is very similar to the dot com bubble. And that at some point, the over-inflated expectations will burst. Like the dot com bust, this also doesn’t mean that the technology won’t continue to be transformative, but it does mean that a lot of overhyped companies riding high on AI right now will go bust. If that sounds unlikely, remember when Netscape, Yahoo, and AOL were the biggest players?
To be clear, I think it’s inevitable that someone *will* develop an AI that can do these things that are promised, but it’s not going to be an iteration of the systems we currently call “AI.” A whole new model paradigm is needed.
This is actually a good thing, because it means we have a little more time to prepare for it. A little. Not a lot.
The white collar guys when blue collar jobs were going thanks to automation – 
I’m sorry, did this story about how AI is gonna take people’s jobs, use AI to make that picture? I hate that I can’t fully tell anymore.
Honestly, this changes little. I still feel like most engineers have about 5-10 YEARS of earning potential. If I’m a mid or senior dev, I’ll be fine.
It’s really the junior devs that need to be smart about this. Personally, if I were in school I’d be switching to EE right now, unless I were top tier.
The problem isn’t automation, the problem is the lack of a welfare state.
Programmers and engineers will probably go the way of the blue-collar manufacturing professions before them. Each year there will be 1-2% fewer jobs than the year before. As a growing surplus of unemployed workers builds up, salaries will head downward for the remaining workers. Some people will still make it and work their whole careers at good incomes.
The guys who get downsized and aren’t able to get another job in the industry will be gaslit as not working hard enough, not studying the right things. The people doing the gaslighting will point to successful programmers still with jobs as proof.
Leaders are certainly talking about AI.
Though the only thing I have seen so far is something that can manipulate documents a bit. I still had to provide the data and tell it what to do.
Next step: Scripting!!
Meanwhile they go after the most vulnerable in society and tell them to go get jobs, even as those jobs are about to disappear.
Y’all glaze “AI” so hard in here. GPT can’t even give you correct answers to easily googled questions.
Give me an AI that can diagnose and fix hardware, and I’ll still have a job making sure it can continue doing so.
As far as other office workers?
I mean, if a company prefers virtual employees that are inconsistent, lie compulsively, and don’t care about the origin or accuracy of the data they provide, then yea, the AI workforce probably would be better for their business.
Middle managers and CEOs already spend most of their day going to meetings and writing emails. AI will likely have entirely different goals from humans and would probably treat this as pointless ‘busywork’, if it bothered with it at all.
Humans go to meetings all day because humans are social creatures and do a lot of things to appear important and acquire status (such as wear expensive suits, have an ‘important’ white-collar job, drive an expensive car, have a corner office, get promoted, etc).
AI could already do most of the work of CEOs and middle managers. These people don’t add any actual value to the bottom line; they just appear busy and trade on seniority within the org structure.
AI will come up with an entirely different org structure.
Policy always lags behind technology, unfortunately. Hard to avoid when the average age in the UK is 40 and the average age of lawmakers is about 55. It’s even worse in the States, where the average American is 39 and the average lawmaker is around 61.
But this is a long-standing pet peeve of mine. The standard work week should long since have been decreased from 40 hours to match productivity gains. We should already have a standard 30-hour work week, and the mismatch will only get worse as productivity ramps up with AI.
Why would malevolent governments mitigate the effects of making citizens more vulnerable to extortion and control? A desperate populace is a malleable populace, and they can always shift the blame to some convenient scapegoat.
So the question was “Why aren’t governments doing something?” and my answer is… they are, but you’re not gonna like it.