
AI Isn’t Failing. Your Enablement Is.

6 minute read

The skills gap is an activation problem, not an intelligence problem

Ascend Team

Senior Content Strategist

Content creator at Ascend, exploring the intersection of AI, marketing, and the future of work.

The tools are there. ChatGPT, Claude, Gemini, Copilot - all available, all powerful, all sitting unused or underused across your organisation. You've got the licenses. Maybe you've done the training. And yet adoption stalls. Usage drops. The ROI never materialises.

Companies are investing more than ever – according to Ramp, AI contract sizes have grown from $39K in 2023 to over $500K in 2025. And the products are working: retention rates have climbed from 50% to over 80%. But adoption still dipped in late 2025 - broadly, across industries and company sizes. If the tools were the problem, retention would be falling. It's not. If companies weren't committed, contract sizes would be flat. They're not.

The tech is delivering. The investment is there. So where's the gap? It's the people layer - the enablement, the fluency, the scaffolding that turns access into activation. Ramp's own analysis confirms it: the next wave of adoption requires what they call 'implementation gains' - people figuring out the best use cases so others can follow. Yet the technology still gets blamed. "It hallucinates." "It's not ready." "My use case is too complex."

But here's what over 70% of practitioners told us: the technology isn't the problem. The enablement is.

The Great Skills Gap

Most AI enablement looks like this: here's the tool, here's your login, here's some upfront training, good luck. Then it's hands-off. And that hands-off approach is exactly what leads to silos - walls between teams and divisions, people working in parallel, everyone charting their own path alone.

The assumption is that AI is intuitive enough that people will figure it out. Some will. Most won't. And the gap between those who thrive and those who struggle isn't intelligence or technical ability - it's exposure, practice, and permission to experiment. The missing skills aren't complex: providing context, structuring prompts, breaking tasks into steps. But without an environment to practice in and learn from others, these basics never become instincts. This skills gap isn't a gap in AI's capabilities. It's a gap in our collective ability to activate those capabilities through people.

One practitioner put it bluntly: "We gave everyone access to Claude. Six months later, maybe 10% were using it regularly. The rest tried it once, got a mediocre result, and went back to how they've always worked." Sound familiar? It's the same pattern everywhere - inside organisations, and out. Even if AI can amplify our capabilities, we haven't put the scaffolding in place to learn and apply it together.

"The people who get value from AI aren't smarter. They've just experimented more. Found a system. And compounded."

Why Training Alone Falls Short

The instinct is to fix the skills gap with more training - workshops, certification programs. And while these help, they miss the root of the problem.

AI fluency isn't learned in a classroom. It's built through repetition, trial and error, seeing what works and what doesn't in your actual context, engaging in direct dialogue with others who are figuring it out too. And here's the compounding issue: even when someone figures it out, that knowledge stays with them. No one else learns. The next person starts from scratch. We all pay for the same learning curve over and over again.

Training gives people information. But it doesn't give them fluency. And fluency is what drives adoption - but that, for most, comes with repetition and learning from others. The best communities understand this - they create space to work, build, test, and share. But most people don't have access to those communities. And even the best ones sit outside your actual work.

The Abandonment Curve

Research shows that fewer than 10% of ChatGPT users ever visit another LLM provider. People stick with their first tool - even if it's not the best fit. Why? Because switching costs aren't just financial. They're cognitive. Learning one tool is hard enough. Learning to evaluate and choose between tools? That's a job in itself.

This results in an abandonment curve. Someone tries an AI tool, gets a mediocre result, and concludes "AI doesn't work for me." They're not wrong about the result. They're wrong about the cause - the setup, the context, what's actually needed to get the most from it.

The cause isn't AI's limitation. It's the absence of guidance on how to use it well.

According to a16z, only 1 in 9 consumers actually pay for an AI model. Not because the value isn't there - but because most people never unlock enough value to justify the cost. They abandon before they activate.

"AI adoption doesn't fail at the rollout. It fails at the second attempt."

What Activation Actually Needs

Closing the skills gap isn't about more licenses and training. It's about creating the right conditions for activation - whether you're an individual building your own capability or an organisation scaling across teams.

Exposure: People need to see what good looks like. Not generic examples or gated playbooks that worked for someone else's context - real workflows from people like them, solving problems they recognise.

The challenge isn't finding time - it's finding signal. Feeds full of 'comment for my playbook' promises. Courses behind paywalls. Templates that don't translate. People aren't failing because they're not trying - they're failing because what's available doesn't meet them where they are.

Permission: Experimentation requires safety. If people fear looking stupid or wasting time, they won't try. The individuals and organisations winning at AI adoption have made experimentation an expectation, not an exception.

Feedback loops: Learning accelerates when you can see what worked and why. Not just for yourself - but from others. When someone figures out a better way to use AI, is that captured? Or does it stay locked in their head, lost when they move on?

This is the shift from enablement as an event to enablement as an environment. Not a one-time training, but an ongoing system that helps people build fluency by doing - together.

Closing The Gap

The people and companies winning at AI adoption aren't using better models. They're building better enablement. They've stopped treating AI like software to learn and started treating it like a skill to cultivate. They've built environments where experimentation is normal, where learning compounds, where activation is the goal - not just access.

This is why communities have exploded - people searching for a home to learn alongside others. But communities alone aren't the fix. They're external. They're fragmented. They don't integrate into your actual work. The real shift is building this environment where you work - where experimentation compounds across your own context, not just in a Discord server you check on weekends.

For leaders, the question isn't 'what AI tool do we need now?' or 'why aren't I seeing returns?' It's this: are you measuring adoption or just access? How easy is it for someone to go from 'I tried AI once' to 'I use AI daily'? If the answer is "they figure it out themselves," you don't have an AI strategy. You have a lottery.

For individuals, the same question applies. What's your system for building fluency? If it's just "scroll, save posts, hope something sticks" - that's not a system. That's noise.

The skills gap is another area where AI adoption quietly dies. Close it, and you don't just get higher usage - you get compounding capability.

The skills gap is just one piece of the puzzle. But even when people build fluency, they're often building it alone - reinventing what someone else already figured out.

The goal isn't to copy someone else's path. It's to start from their learning and build forward. To take someone else's charted territory as a foundation, add your context, and create something new. That's how capability compounds. That's what most of us are missing.

Next: why AI is still single-player, and what multi-player looks like in 2026.

Thanks for reading The Experimenter's Library! Subscribe to receive new posts weekly.