
Fresh Gallup data from 23,700+ US workers paints a complicated picture.
50% now use AI on the job. Three years ago it was 21%.
But here's the tension:
- 65% say AI made them more productive individually
- Only 10% say it fundamentally transformed how their organization works
- 18% fear their job gets automated away within 5 years
- That number rises to 23% at companies that have already adopted AI
So we're in this strange middle phase – widespread adoption, real individual gains, but no evidence yet of the systemic transformation everyone predicted.
The firm-level productivity data (from a separate NBER study cited in the Gallup report) shows CEOs reporting minimal effect on company-wide productivity over the past three years.
Individual wins. Company-level flatline. Growing fear underneath.
Source: https://www.gallup.com/workplace/704225/rising-adoption-spurs-workforce-changes.aspx
Where do you think we are in the actual adoption curve – early innings or approaching the real inflection point?
Half of America now uses AI at work – but only 1 in 10 says it's actually changed how work gets done
by u/MaJoR_-_007 in r/Futurology

If you’re using it at work, then literally, it changed how work gets done. That’s like a tautology.
Fundamentally this is how every sweeping technological change has occurred in the workplace, from steam power and electrification to computerization.
There’s scattershot, inconsistent adoption across industries, firms, divisions of firms, and individual workers as everyone tries to figure out how this actually applies to their workflow, their project goals, their division, their industry, etc.
The vast majority of individual actions and systems won’t work out and will fall to the wayside, but the best and most effective systems will scale up and become the standard across numerous areas.
The first computers didn’t fundamentally transform how work was done, they replaced specific tasks in specific industries, then scaled, then spread, then scaled again.
Because most of them just use it to edit their email or summarize stuff. Not the core work of most people. Programming is the exception.
I’ve seen people use AI then edit 10% of it and say AI didn’t help at all.
# only 1 in 10 ADMIT it’s actually changed how work gets done
AI has greatly assisted me in my work but there’s still very much a human element needed in at least 90% of what I do.
I highly doubt 50% use AI in the way you’re thinking. Polls are not very nuanced. Using Grammarly is “using AI.” Clicking Copilot and asking a question is “using AI.” Probably less than 5% of America is using AI the way you think they are, in a maximally thoughtful way.
What I’m finding is that people had a baseline of “work” from years ago that is being met but not exceeded. Maybe it was before covid, maybe longer. And for *those* people AI isn’t magically producing less work but getting them back to that manageable baseline. Productivity didn’t increase in all cases; it was just shuffled around due to layoffs or a refusal to hire over the years, so it doesn’t feel like a gain or a win.

Though it will be interesting to see what happens with the *”I automated myself out of a job”* crowd, many of whom were the first adopters who were *super* excited about generating prompts, and now things like *”context engineering,”* where they create repositories of prompts and tools to do their job. Eventually even a senior engineer isn’t seen as “necessary” if the job only requires tweaking prompts and context files.
Half? Almost everyone at work uses it. Noticing how well everything is written now. Copilot in everyone’s Outlook is making emails and team meetings tedious LLM recaps now. Another deck made with Canva AI… cool.
No one wants to admit it either, even with company policy being cool with it… What are the best use cases? Honestly I feel coding is the key area.
We’re right at the peak of one of the biggest tech bubbles of all time, with business idiots drinking the Flavor Aid of some of the worst flogs in history due to a total desperation on their part to prevent it from bursting.
You only have to see their reaction to the inevitable finding out part of their fucking around (the attacks on Altman last week) to see just how out of touch with reality they all are – nobody wants it, nobody likes it, and the more they try and force something they’re repeatedly claiming is going to end the world onto people the more events like that there will be.
We’re limited to using MS Copilot, which doesn’t hold a candle to Claude, Gemini, etc. I tried Copilot to do some basic data manipulation in Excel and it failed terribly.
This is how most technological adoption works.
Slowly and then all at once.
First it’s copilot to edit your emails then it’s a full stack that you work with that’s plugged into internal corporate documentation (similar to Confluence Rovo)
I was able to get some pretty esoteric domain knowledge from a buried piece of documentation from 7 years ago using AI (we have one of the top 10 largest Confluence instances in the world) that I never would’ve found otherwise.
It’s just a glorified search engine for me at work. If I try to have it do anything too complicated it will sometimes just straight up lie.
Some of it is just too early – my workplace, for example, released a major update in January that impacts how folks are scheduled for engagements around the world. But two to three months’ worth of data is hardly enough time to say whether it transforms how we work.
A lot of us are individually pursuing it because companies pushed out the requirement to use it without a strategy for how to use it to transform business. So charitably we could argue it helps individual employees save time with routine tasks like writing emails or managing calendars, but it’s hardly earth shattering work.
The one thing I think it has done is make a good quality substitute for voiceovers in the education we produce. We still write the entire script, but we have a reliable way to read it, without having to worry about employees leaving, recording badly, or having vocal tics that learners can be rude about in recordings. Would I use it to narrate a book? No. But for microlearning under 30 minutes? Sure.
Personally I think much of the hype around LLMs has overstated what they can do, unless you’re in certain fields. If pushed, I would say my most common use of it is de-angering email tone for office politics. And analyzing my ridiculous inbox to try and figure out what is urgent, what are five-minute fixes, and what needs to be folderized for larger tasks.
Having spent the past year evaluating support-assistance and “support-replacing” tech, we found incremental lift but nothing close to “replace 50, 30, or even 10%” of staff in deployment. The exception was if we were willing to accept the accompanying enshittification of the customer experience as customers fight their way through what are essentially AI hurdles to get many answers handled effectively. Some companies have sacrificed, and presumably will continue to sacrifice, customer service quality in the name of “AI replaced people.”

From our analysis, current reasoning models are not capable of handling anything but the most trivial customer support interactions; once complexity rose, answer quality fell to low-tier human level and customer satisfaction scores dropped precipitously. It may be that the models will improve to a tipping point of usefulness, but it’s pretty clear they have a ways to go. Interestingly, too, the commercial models today would cost us money for “attempted call handling.” That makes a lot of solutions difficult to cost-justify.
I’m a software engineer, and personally I find the C-suite skew really aggravating.
I use it to generate code but I spend much more time researching things to make sure complex system designs hold water and/or use it to learn tools and libraries I have a tenuous grasp on.
Meanwhile the C suite just looks at the cost and says “wen moar code; want quadruple.”
For me the simple fact is I’m tackling things that maybe didn’t involve a ton of code but involved a lot of planning and research that previously could have taken weeks but now takes days. It’s a big boost to what my team can produce but it doesn’t “present” the same way coarse grained and error prone metrics like number of tickets or number of pull requests gets treated.
I mean, I’ve been encouraged to use AI. But unless the input is restricted to what I need, it just outputs whatever it finds online. So if it’s outside my scope but I’m asked to include something, I now have to verify whether any of it makes sense or is applicable.
It’ll probably get better. But for now I don’t trust the Facebook-post regurgitator.
Oh wow, AI that’s very good at certain specific data related tasks has no effect on tasks unrelated to manipulating and searching large amounts of data. I’m shocked I tell you.
Those 1 in 10 are as productive as the other 9 combined. Going to be a messy year.
I work in tech/software. They’ve been trying to force us to use it more in our day-to-day tasks but it’s just wildly inefficient. I ask it to perform simple tasks and it takes 1 to 1.5 hours to complete what I can do in 10 minutes.
I mostly end up using it to find information for me so I don’t have to spend time scouring wiki pages for the answer.
Gallup is compromised. Hard to trust a measuring system once it starts to bend to outside influence.
Most of my AI usage is trying to decipher nonsense AI code that someone else used to break something.
I have undertaken several great new projects at home in my spare time thanks to AI. It has definitely helped me unlock some frozen creativity.
I’m forced to use it at work and it is a pointless slog trying to find ways to integrate it into my workflow. I don’t need it for my day-to-day activities, and due to confidential/proprietary data restrictions most of our stuff can’t leave the premises anyway, so going out to someone else’s data centers is a no-no.
And yet, every 1:1 with my boss, every week, the first question is not about our projects, but about how often I’m using AI on a daily basis.
Make it make sense. (It does make sense. It’s all a corporate dog whistle in preparation for layoffs in six months of everyone who “wasn’t on board with our new direction.”)
I use it to take notes on my zoom/teams calls so I’m not constantly looking at my other screen and typing loudly. It does a fairly decent job at that.
I use it to edit emails but if I try to do any complex analysis of data I can’t trust it.
because most people don’t know how to use it beyond a chat bot
It will be too late for most people when they realize just how good it has become.
People where I work are trusting it far too much, like watching a meeting recording and simply blindly trusting the notes.
I’ve found it missed quite a bit, it misinterprets information and whatnot.
It’s also really awful at doing things like technical story writing as well as answering relatively complex questions.
AI does simple things ok. It’s terrible at everything else.
As someone who is autistic, it really helps me work, because directions that are clear to neurotypical people often have multiple interpretations to me. So if I get overwhelmed with all the interpretations, I pull up ChatGPT and I’m like, “My boss sent me the message below; what does he mean and what does he want me to do?”
ChatGPT then actually gives me a couple of paragraphs that let me interpret what my boss wanted. It’s only been wrong when there was objectively no way anyone could tell what my boss wanted. Like, I’ve had times where my boss says he wants me to look up laws on indigency, when really what he meant to ask was to find a pre-made motion of indigency and fill it out for an unnamed case. As long as it’s not directions like that, ChatGPT helps stop my 30 minute interpretation spiral I get when receiving directions.
9 in 10 are not using it properly if it hasn’t changed how work gets done. It’s a fundamental shift in what work even is (if you’re in an office job).
I’ve used it here and there to create low-priority/nice-to-have “mid-level complex” automation scripts that my company doesn’t deem important enough to have their actual automation teams work on. The ones where I know what I want, and enough to get it started, but get stuck in areas and basically don’t have the time to dig through hundreds of docs. It probably takes more time than it should to “create” them, but it saves me time in the long run, and contrary to what a lot of people say, if you actually pay attention, you can actually learn things as you go.
On the last one I wrote (we use Copilot), the thing straight up argued with me and kept patting itself on the back while being completely wrong. You certainly do need to know something about what you are trying to do, or it could lead you completely down the wrong path. I find ChatGPT to be somewhat better in this respect.
As others have pointed out, it can do a great job of clarifying/simplifying complex ideas that one might have trouble writing into coherent statements that people who know the bare minimum about the subject can understand. Us tech guys tend to speak in different terms and dive too far into the weeds for most people not in tech.
It’s also great for running scenarios by and asking it to make graphs/spreadsheets etc. showing all the outcomes or possibilities.
I guess I just look at it like the calculator. I don’t NEED it, but when things get a bit complicated it’s helpful to use it without having to wrack your brain or search for the answer for hours.
The downside of this is management is toeing the line about how AI is going to revolutionize the company. It’s not. But they don’t care. The people making these decisions don’t know anything about technology. And apparently they’ve forgotten that humans like to talk to humans.
AI is a productivity multiplier. A multiplier of 0 is still 0.
Yeah it’s a real game changer for writing emails to customers that validate their feelings but are also policy first.
This is snake oil being made to seem like the next technological revolution. The worst part is all the mouth-breathing CEOs thinking this is going to shrink their costs and put people’s lives in jeopardy through layoffs.
>no evidence yet of the systemic transformation everyone predicted.
I am not sure what you mean. 10% workplaces “fundamentally transformed” is *huge*.
But yeah, it’s just the beginning.