>Mass automation has happened before, at the start of the Industrial Revolution, and some people [sincerely expect](https://www.thomsonreuters.com/en/insights/articles/benefits-of-artificial-intelligence-ai) that in the long run it’ll be a good thing for society. (My take: That really, really depends on whether we have a plan to maintain democratic accountability and adequate oversight, and to share the benefits of the alarming new sci-fi world. Right now, we absolutely don’t have that, so I’m not cheering the prospect of being automated.)
>But even if you’re more excited about automation than I am, “we will replace all office work with AIs” — which is fairly widely understood to be OpenAI’s business model — is an absurd plan to spin as a jobs program. But then, a $500 billion investment to eliminate countless jobs probably wouldn’t get President Donald Trump’s imprimatur, as Stargate has.
>
Background-Watch-660
Have the government send UBI checks to everyone as essentially one big productivity dividend on the entire economy; gradually increase the payout to let humans work less and enjoy more leisure time as technology gets more advanced.
I don’t know why you all are taking so long to figure this out.
dasdas90
Stargate won’t be able to achieve this. These guys are hoping that by having a lot of GPUs their models will get better, which is wishful thinking. Unless someone finds some kind of revolutionary new theory, we are not going to get AGI.
AI will certainly reduce the number of employees needed for certain kinds of work (like programming), but it won’t be able to replace them.
Lumix19
I’m curious: what does it mean for an AI system to act independently and commit serious crimes? What is an example here?
Is it possible for AI to rewrite itself in a way that would pose a threat to the political and billionaire class?
Because I would not be that upset if that were to happen.