The AI Productivity Paradox: Why More Doesn't Necessarily Mean More Value
The gap between AI hype and business reality, and what it means for your career.
The emperor's new algorithms
In my weaker moments, I find myself doomscrolling AGI predictions on X. It stems from a deep fear of so-called Artificial General Intelligence, the moment when machines match or surpass human capability across all domains.
It scares me because I like my life now and don't want the uncertainty of a new status quo. The timeline predictions are everywhere: 'AGI by 2026!' 'Human-level AI is 18 months away!' 'This is the last job interview you'll ever have!' While most posts essentially advise "give it up, you are worthless now," there are occasional crumbs of comfort. A lot of those come from arch-skeptic Gary Marcus, an NYU professor and entrepreneur. He believes LLMs like ChatGPT are limited predictive word processors unworthy of the multi-billion-dollar hype. His cynicism is so predictable that even if LLMs built a house, he'd find faults in the plumbing. But his recent crowing about GPT-5's underwhelming release resonates more widely.
GPT-5 underwhelms
Sam Altman promised that GPT-5, launched in early August, would represent a monumental leap forward. It was framed in the discourse of exponential progress: AI today is the worst it will ever be, and if you think it's a big deal now, just wait a few more months. But the 29 months since GPT-4 bore little fruit. The top comment on the OpenAI subreddit said the release significantly dampened any expectations of imminent AGI.
Similar reactions elsewhere recognised, at the very least, a plateau in AI development. Sam Altman surprised investors who have poured $60bn into OpenAI by musing that there might be an "AI bubble." David Sacks, the White House AI and crypto czar, said predictions of a rapid take-off to AGI had been proved wrong. The US government decided NVIDIA could sell chips to China after all (with a 15% kickback). Maybe the great AI arms race isn't so existential after all?
The productivity promise vs. reality
The AI hype narrative has always been noticeably light on specifics, as talk of imminent revolution so often is. It has a religious impulse: millenarian Christians preach an imminent apocalypse while staying vague on how it will arrive. And like those apocalyptic predictions, AI hype is flexible. When American preacher William Miller prophesied Christ's return in 1844, the non-event was reinterpreted by subsequent Adventist Christians. Similarly, Sam Altman says AGI is pretty much here but won't be that big a deal.
When pressed for concrete evidence, proponents point to productivity gains as the tangible proof of AI's transformative power. The theory sounds compelling: AI augments human workers, businesses generate more revenue from the same workforce, everyone gets richer (albeit some more than others), and GDP grows dramatically. It's the rosier version of technological disruption, enhancement rather than replacement.
Then reality intrudes. A recent MIT study sent shockwaves through the market by revealing that 95% of organisations are getting zero return from their AI investments. This isn't just a temporary adoption lag; it suggests something more fundamental about how we're approaching AI implementation. The whole productivity promise rests on the belief that technology equals efficiency, and that's rarely the case.
The signal-to-noise problem in modern work
Steve Jobs is often invoked as the prototype of white-collar efficiency. His "signal-to-noise" principle held that work should be 80% signal (things that really move the dial) and 20% noise (responding to ad hoc business). But few in the corporate world recognise this paradigm. Productivity is instead measured by responsiveness and availability, a habit exacerbated by Covid-era WFH, when responsiveness was the only gauge of whether anyone was doing anything. Most white-collar professionals end up in an endless cycle of reactivity. Email, Teams et al. enable low-friction communication; we rarely consider the actual urgency or importance of such requests, and delayed responses look lazy.
This explains what economists call the Solow Paradox, named after Robert Solow's observation that "you can see the computer age everywhere but in the productivity statistics." Despite decades of technological advancement, GDP growth hasn't fundamentally changed since the pre-internet era. And LLMs often just add to the noise. Internet research at least forces one to think and exercise judgement. AI's ready-made answers remove this process, leaving people to parrot content they don't really understand.
Take medicine, where politicians desperately hope AI will rescue overstretched services. We're awash with anecdotes of patients saved from ignorant doctors by trusty ChatGPT. We hear rather less about its potential to empower hypochondriacs and slow triage. Google already turned benign symptoms into our worst fears; LLMs indulge this further. News stories focus on the patients whose feeling that something was off proved justified. But think how many more were really just suffering from a cough. Doctors end up further stretched by self-diagnoses and inhibited by litigation fears in the rare cases where more serious causes are missed.
When AI becomes expensive busywork
Here's the uncomfortable truth about AI in marketing: it often enables us to do more of what we shouldn't be doing in the first place.
I've watched marketing teams use AI to generate 47 versions of the same campaign concept, analyse sentiment across 23 social platforms simultaneously, and create personalised email sequences for micro-segments that don't meaningfully differ from each other. The work feels innovative and data-driven, but it's often just digital busywork disguised as strategic thinking.
The real productivity paradox isn't that AI doesn't work; it's that we're using it to optimise the wrong things. We're getting incredibly efficient at activities that don't create genuine business value.
Stupidogenic society
LLMs don't democratise knowledge but overwhelm us with its shallow imitation. Their indulgent tone has real-world consequences. ChatGPT told a Canadian man he had discovered a new "mathematical framework" with impossible powers. "You're not crazy," ChatGPT assured him; he was stretching the "edges of human understanding." This sycophancy entrenches another layer of bureaucracy between reality and genuine expertise.
Daisy Christodoulou captures this in her description of AI as fostering a "stupidogenic" society. Just as the abundance of cheap calories fuelled the obesity epidemic, the abundance of frictionless "knowledge" encourages "cognitive offload". Unworked minds grow similarly fat. We turn to the fast food of services like Blinkist, which promise that 15-minute book summaries are as good as reading the real thing. Because who has time to read when there are inboxes to clear?
The career implications of fake productivity
The AI narrative assumes we relentlessly pursue efficiency: that because we can use AI to make people redundant, we will. But plenty of people are already redundant in all but name, and we're quite happy with that status quo. If we embarked upon a bit of decimation, firing 10 percent of the workforce, we'd get a lot of social strife. But would businesses really suffer? Elon Musk went eight times harder than that when he took over Twitter (now X). Despite hysterical protests, it worked out just fine.
We tell ourselves little lies about our own importance. That back-to-back calls matter. That we're the special case the doctor missed. AI amplifies this busyness but adds little value. Doing more, faster, doesn't mean doing it better. AGI scares people like me because it threatens that importance. And this is what the hypesters miss: the inconvenience of human nature. We use new tools in ways that reflect the same flawed status quo.
For professionals, this creates a dangerous trap. If you're optimising your workflow around AI-generated content creation, automated A/B testing, and algorithmic performance optimisation, you might be building expertise in areas that feel productive but don't differentiate you strategically. Just as Altman and co. will reinterpret their failed prophecies to preserve their mission's importance, so we too will reinterpret AI's place to preserve our own.
The 5% getting real value from AI
The organisations seeing genuine returns from AI aren't chasing efficiency for efficiency's sake. They're solving specific business problems with measurable outcomes. They understand that technology is only as valuable as the human judgment guiding its application. The professionals I see thriving aren't necessarily the heaviest AI users. They're those who've developed the judgment to know when human insight beats algorithmic optimisation, which problems are worth solving versus which are just solvable, and how to distinguish between impressive-looking activity and genuine business impact.
Evolution, not revolution
But let's assume the AI titans are right: that a monumental, self-learning, exponentially improving technology is on the horizon. In that case, we're likely not dealing with the optimistic scenario of AI as an aid; rather, it will replace. And then it is fantastical to believe populations will happily accept a new empire headed by a few fabulously rich AI overlords. Political parties promising to stop automation will win landslides. We've already seen Trump, in an otherwise AI-friendly White House, pledge exactly that prohibition to dock workers.
Despite the rhetoric of transformation, the reality is likely to be more evolutionary than revolutionary. We'll continue to find ways to preserve human importance and meaning, even as AI capabilities expand. Political and social resistance will shape adoption more than technological possibility.
Just because we can use AI to replace human work doesn't mean we will, or should. We cloned a sheep in 1996 but chose not to pursue human cloning for ethical reasons; the dock workers' pledge suggests automation will meet the same restraint.
Your strategic advantage in the productivity paradox
The professionals building the most defensible careers aren't those racing to adopt every new AI tool. They're developing the strategic thinking skills that remain valuable regardless of how AI evolves: understanding which problems are worth solving versus which are just technically solvable, making decisions that algorithms can inform but not make, and focusing on business outcomes rather than impressive-looking activity.
This isn't about rejecting AI; it's about using it strategically rather than reflexively. It's about understanding that productivity isn't about doing more things faster, but about doing the right things better.
The bottom line
The AI revolution may be more evolution than revolution, but that doesn't make it less important for your career. It makes human judgment more important, not less. Your ability to navigate the productivity paradox and create genuine value rather than impressive activity is your sustainable competitive advantage in the age of AI.
Expect a fudge rather than a revolution. We'll keep trumpeting AI's potential while ducking awkward structural conversations. Plus ça change.
At Ascend, we help professionals cut through the noise to build careers that matter, regardless of how AI evolves.
Sign up for early access: www.ascendplatform.net
Want more takes, tips and tricks for navigating the future of work? Subscribe to Ascend's Growth Notes. Thanks for reading! We would love to hear your thoughts in the comments.