Wait until the BoDs figure out AI is a perfect replacement for CEOs, not the labor under them.
tc100292 on
It’s because dipshits in the C-suite signed stupid deals for AI tools and then told the employees to figure out what to actually do with them.
LorthNeeda on
It’s a useful tool for certain things. It’s simply not the game changer they all want it to be.
pimpeachment on
As someone who heavily uses AI at work, this is just feeding the delusion that AI isn’t a major factor for the future of work. People who use AI will replace people who don’t. It’s like refusing to use the Microsoft suite because you have “principles”.
mvw2 on
AI is like a toaster in your kitchen. It has a pretty small function in the total scope, but it works very well at its task when used correctly.
If AI was used in the way it’s functional for, all would be well.
But CEOs and AI companies are trying to turn that toaster into a chef, a waiter, a dish washer, a manager, a restaurant owner, etc. They’re trying to make AI do everything and trying to sell the idea that it CAN do everything and that it will save you so much money if you’d just fire all your staff. Let that toaster manage the business, do your taxes, cook your food, serve customers, clean up the place, etc., etc. And this is the grand lie being peddled to all.
Now, AI can be tuned to do other tasks. It can be highly specialized to cook well, to clean well, to do taxes, to perform many very specific tasks. But that AI tool is only good at that task. It’s no longer a toaster. It’s no longer anything else.
Now you start bundling a pile of AI tools together. Hey look, it can toast, but it can also make eggs, cook a steak, serve people, etc., but it’s all a mash of many small AI tools. In a way, we’re building the equipment, the utensils, the itemized steps of every process, and for each and every tiny part, AI can be good, but only singularly good.
The downside is two-fold.
Once amassed back together, it’s still a really, really big model, simply because each tool has to become incredibly specialized to be remotely and reliably competent. Will it get better? Eh…slowly. Some want to argue AI is improving leaps and bounds, and it is. But that’s because of the optimizations and packaging, learning what AI can and can’t do, and tuning. You will see some rapid, seemingly large changes with these big brush strokes, but it won’t stay at this pace. The big improvements are fast and based on those big fundamental changes. The fine-tuning work to build reliability and consistency will be tiny in comparison. The grand improvements are mostly done. Now you will only see improved specializations, which is great. You just won’t see big evolutionary changes. There isn’t even any more data to use. To get where we are now, we’ve already fed the significant bulk of humanity into these systems. It’s just the micro work left. And worse than this: none of it makes the model smaller.
The second downside is ignorance. AI is only reliably used if the outputs can be vetted. This means any user of AI needs to be more knowledgeable and experienced than the work being asked requires. The user needs to know the correct answer before AI is asked the question. Anything less than this is use through ignorance. When placed into any business environment, ignorance only does harm. That ignorance will destroy a business. And as these highly experienced, very knowledgeable people retire out of the workforce, no one will be there to replace them. The loop closes, and all that’s left is complete and total ignorance, full circle. This is the fundamental danger of AI as a tool: it is not capable of understanding what it does, and it will happily err with tremendous confidence. If you cannot recognize the error, you will take it all at face value and run with it.
tiboodchat on
If there’s anything the past year taught me, it’s that humans are absolutely capable of convincing themselves that the very thing they see is in fact not what they see.
Make it what you want.
Ihor_90 on
I’m just tired of senior management shoving useless AI “tools” down my throat. But now they’re also worried about AI spend lol.
So they expect 3 things:
– higher productivity
– increased AI adoption tracked through metrics
– fewer tokens spent, because it turns out they’re expensive
Make that make sense.
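The third expectation is at least easy to quantify: token spend is simple arithmetic, requests times tokens per request times price. A back-of-envelope sketch, where every number (headcount, usage, the $10 per million tokens) is illustrative rather than a real price:

```python
def monthly_token_cost(requests_per_day, tokens_per_request, usd_per_million_tokens):
    """Back-of-envelope API spend over a 30-day month; prices illustrative."""
    tokens = requests_per_day * tokens_per_request * 30
    return tokens / 1e6 * usd_per_million_tokens

# 200 employees x 50 requests/day x 2,000 tokens at a hypothetical $10/1M tokens:
print(f"${monthly_token_cost(200 * 50, 2000, 10):,.0f}/month")  # $6,000/month
```

Even at made-up prices, the headcount-times-usage multiplier shows why "more adoption" and "fewer tokens" pull in opposite directions.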
americanfalcon00 on
there is another way to read this headline: businesses making progress on AI implementation are not talking about it. they are waiting until they can get enough scale to kill their competitors.
and i guarantee you these successful companies are not the ones using AI as an excuse to fire thousands of people.
pinkpugita on
To add, sometimes AI is the cheap way disguised as innovation, when the necessary innovation is just more expensive.
My company still uses old software because it couldn’t be bothered to spend on a system upgrade. Learning to use it is a massive barrier for new employees and doesn’t give long-term benefits in skills. The old software even stopped working on some computers, and the employee had to use someone else’s terminal to gather the data needed.
Now, they are introducing Microsoft Copilot for “efficiency,” but it won’t have the same impact as actually updating the dinosaur software.
Trix_Are_4_90Kids on
AI is a support tool; it won’t take the place of human beings. That is a pipe dream.
redvelvetcake42 on
The problem is what it CAN do requires a LOT of human input. It’s great at doing mundane tasks and making several things less time consuming. But it NEEDS you, the human, to set up the guardrails and define the project and results needed. You can’t just replace an entire staff with AI. It won’t know what to do and it won’t be catered to your needs unless you directly train it.
AI is a scam.
AI = UI replacement + Python automation + error-prone answers.
But now you have to pay an ever-increasing subscription fee to a third party instead of having employees.
WitnessMe0_0 on
Since my friend’s company started to push real hard for AI adoption, the number of software outages increased significantly with engineers spending unpaid overtime to fix the crap. Also they keep firing engineers and have some junior contractors scratch their heads while mission critical applications go dark for hours.
Classic_File2716 on
AI can be useful, but it can’t do everything.
BreathSpecial9394 on
Now this is a great article, MySQL in Rust 2000 times worse? That takes the cake.
nasolodakim on
yeah because stacking lego flowers is peak ai sophistication
nasolodakim on
Yeah the hype train’s about to derail spectacularly
technicalanarchy on
There are plenty of AIs out there doing high-level brass-tacks work with great success, but it’s not the LLMs. I have a feeling something else will come along and LLMs will end up like the LaserDisc did.
The nature of LLMs is that they are made to be slightly to very random.
There is no consistency; every prompt consumes tokens and water and energy, and the models change often and the policies change often and greatly. How is anyone considering transferring a business over with that track record?
There has to be something more efficient, less draining on computational and physical resources, with a longer memory.
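The "slightly random to very random" behavior is literally a dial: sampling temperature. A minimal stdlib sketch (toy logits, not any real model's decoder) of why temperature 0 is repeatable while anything above it is not:

```python
import math
import random

def sample_token(logits, temperature, rng):
    """Sample one token id from raw logits.

    temperature == 0 collapses to a deterministic argmax; anything
    above 0 draws from a softmax distribution, which is why the same
    prompt can yield different outputs run to run.
    """
    if temperature == 0:
        return max(range(len(logits)), key=lambda i: logits[i])
    scaled = [x / temperature for x in logits]
    m = max(scaled)                               # subtract max for stability
    weights = [math.exp(x - m) for x in scaled]
    return rng.choices(range(len(logits)), weights=weights, k=1)[0]

rng = random.Random(0)
print(sample_token([2.0, 1.0, 0.5], 0.0, rng))                       # always 0
print({sample_token([2.0, 1.0, 0.5], 1.5, rng) for _ in range(50)})  # several ids
```

Vendors expose this as a `temperature` parameter, but even at 0 other sources of nondeterminism (batching, floating-point order) can remain.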
singlecell_organism on
Around me at work, I see everyone successfully using AI in useful ways. I don’t see this take often, but I don’t know if that’s specific to my industry. I’m a lot faster at coding by guiding Cursor along; also fixing artwork, debugging computer issues, communicating better over email.
It’s not the 1,000,000,000x the AI hypers say. But for how early the tech is, I find it incredibly useful.
ProfessorPickaxe on
I work for a tech company and I’ve been asked to AI-ify a bunch of stuff.
A lot of it is kind of nonsensical use cases for SaaS systems, which have highly optimized user interfaces to present structured data in an efficient way. But I’ve been asked to put AI on top of that so people can ask an AI to explain things they could easily see in the user interface. It doesn’t make any sense.
XanXic on
I know redditors don’t typically read past the headline, but it’s kind of funny how this is a pro-AI article people are assuming is anti-AI. It’s an interview with a co-founder of an AI advisory service basically saying ‘people aren’t totally using it right, but we can figure that out’.
It’s talking more about coming up with a better way to measure AI’s usefulness than the traditional ways we measure work output. Again, something I’m sure the company the interviewee works for will gladly help you figure out.
They do make some salient points about the limitations of LLMs, like non-determinism making them unreliable at a foundational level. But it seems like something they still believe can be worked around with the right ‘metrics’.
not_old_redditor on
Why is it always about coding?
cynicismrising on
It’s always worth looking at what it takes to run AI models locally. Some fairly good coding models can run in 16-32GB of video memory. With 128GB of video memory you can get into the ~100B-parameter models. 128GB usually means AMD’s Strix Halo or Apple’s M-series Max machines.
Although if you want to run the really big ones, a cluster of 3-4 Mac Studios with 256-512GB of memory is in your future. Or, if you have millions, Nvidia will sell you a complete server setup.
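Those memory figures follow from a standard weights-only rule of thumb: parameter count times bits per weight, divided by 8, plus headroom for the KV cache and runtime buffers. A rough calculator, where the 1.2 overhead factor is an assumption rather than a measured value:

```python
def approx_model_memory_gb(params_billions, bits_per_weight, overhead=1.2):
    """Weights-only memory footprint with a fudge factor for KV cache/buffers."""
    weight_bytes = params_billions * 1e9 * bits_per_weight / 8
    return weight_bytes * overhead / 1e9

# A ~100B-parameter model quantized to 4 bits fits in 128GB of unified memory:
print(f"{approx_model_memory_gb(100, 4):.0f} GB")   # 60 GB
# The same model at fp16 needs a cluster:
print(f"{approx_model_memory_gb(100, 16):.0f} GB")  # 240 GB
```

This is why quantization, not hardware, is usually what puts a given model in reach on a single machine.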
TheDuke2300 on
So reality wins again?
jabbadahut1 on
I recall a Bojack Horseman episode where a vacuum cleaner is the CEO.
poopmaester41 on
Reckoning soon…lol. These corporations will flush us all down the drain if it means they can prove to themselves that another crackpot idea they bet on had an inkling of a chance to “succeed”—with success being capital generated rather than being a good thing.
danuffer on
Turns out it’s the prompters that are the useless ones.
Halcyon520 on
You are offered a 10 million payout and your regular salary for the rest of your days. All you have to do is convince your boss your job can be fully automated by AI.
I can’t think of a single job that can be fully automated at this point. Everything requires serious levels of supervision and a human level of accountability if something goes wrong.
I don’t know how this will develop, but we sure have been in the lower part of the hockey stick of this exponential curve for a while now.
doxxingyourself on
Of course it doesn’t. It works well in coding and customer service because the output is verifiably correct or incorrect. As soon as you start the guessing game on decisions, that goes out the window.
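That verifiability point can be made concrete: generated code is the rare kind of output you can gate on a mechanical check against known-good answers, with no judgment call involved. A tiny sketch (the function name and test cases here are hypothetical):

```python
def passes_reference_tests(candidate_fn, cases):
    """Accept AI-generated code only if it reproduces known-good outputs.

    Each case is (input, expected). The check is mechanical, which is
    exactly why coding suits these tools better than open-ended decisions.
    """
    return all(candidate_fn(list(inp)) == expected for inp, expected in cases)

cases = [([3, 1, 2], [1, 2, 3]), ([], []), ([5, 5, 4], [4, 5, 5])]
print(passes_reference_tests(sorted, cases))       # True: correct candidate
print(passes_reference_tests(lambda x: x, cases))  # False: buggy candidate
```

For a business decision there is no `expected` column to compare against, which is the comment's point.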
gated73 on
My belief after talking to some folks both client side and services side – neither side knows how to use it. Services firms like to say things like “AI Forward” or “pioneers in AI” but they think entirely in terms of workflow. Clients know they have to get AI, as it’s the new hotness, but struggle to understand its capabilities or where/how to deploy it for solid ROI and end up with fancy RPA or a chatbot+ and tell their friends how they’re leveraging AI.
Clean_Bake_2180 on
As the $2T private credit bubble unwinds and companies refi the separate $1T in debt that had 2-3% interest to 6-7%, enterprises will start taking a hard look at their cost structure and cancelling AI projects. The turd will then roll downhill onto the hyperscalers and finally Nvidia. Another way the turd can start rolling is if consumption collapses. The top 10% is 50% of consumption, but as layoffs accelerate and stock declines destroy the wealth effect, lower consumption will also compress corporate growth and margins. All roads lead to the same place.
SayNoToFirefighters on
why is your cock out bro :s
GrudenLovesSlurs on
I was able to use it for two useful things today that saved me probably 30 minutes. Nice to have, but for the hundreds of millions of dollars my firm is likely investing in AI that is not the return they are looking for.
Qasatqo on
AI is a legitimately amazing toy and a good, useful clerical work tool.
I use it to make meme reaction pictures and format word docs.
But I can’t see it being useful in actual business. Everything it can do in terms of organising work, I can do better, faster, and much more easily, because it’ll take me far fewer tries and corrective edits.
jokikinen on
The pessimism about AI in this sub is hopium. If you really believe that the tools do not make things more productive, you simply haven’t used the newest ones for long enough.
Things are moving fast so ‘best practices’ are not pinned down, it’s true. It’s fine to defer investment on those grounds—and others. But stating things like AI won’t be good enough for coding or content generation just isn’t true. The step up from last fall made it good enough. Maybe not as good as all humans are—but good enough to get the business value out.
If you are discounting the tools entirely, you are discounting your chances for having success in this field.
I think this blog gives a good perspective on it: https://www.ufried.com/blog/not_left_behind/
You don’t have to go with all the hype, nor be the most proficient with all the latest tools all the time. But you should not risk counting out these tools entirely. It’s as naive as believing all the hype.
whatsitcalled4321 on
Ah, but let’s shove it into everything, especially weapons of war.
3xc1t3r on
Maybe people have too high hopes and expectations, certainly in some industries, that it will be groundbreaking. But for my own workflow, and probably most of my colleagues’, it has increased the speed and definitely made some tasks a lot easier and less time-consuming.
Prototyping, testing new ideas, brainstorming, visualising ideas, etc. have probably made us not hire that extra person we otherwise would have.
But is it groundbreaking yet? No.
All-the-pizza on
Reckoning (soon)… AI bubble burst! (Soon)… 🙄
justforkinks0131 on
I really dislike these sensationalist articles. What do you mean by “still”? It hasn’t even been adopted to the extent where it could make a difference yet. It is still the early days of adoption.
In my company, adoption is literally the only thing keeping AI from delivering on the promised cost reductions.
mrcsrnne on
Had dinner with my stepdad who is a big schmuck CEO for a global company the other day:
”So what kind of AI strategy is implemented in your company?”
”…well…people are still kind of figuring out how and why to use it.”
And while I think that’s well and good, it’s just so telling. I feel like AI is like VR glasses. It can be great tech, but not at all as wide a use case as the execs thought from the beginning.