AI L&D capability mapping
L&D frameworks for an AI-augmented world. Not tool training — capability architecture. Knowing what to automate, what to augment, and what to protect.
The problem
Most enterprise AI training is backwards.
Organisations are running "how to use ChatGPT" workshops and calling it capability building. That's like teaching someone to use a calculator and calling it mathematics education.
What I do
How this works in practice
Each engagement is tailored to your situation, but these are the building blocks I work with.
Capability architecture
Mapping which skills should be automated, which should be augmented by AI, and which should be protected from AI altogether. The Automate/Augment/Protect framework.
Learning design
Building training programmes that use AI as a scaffold, not a substitute. A scaffold helps you build capability — a substitute replaces it.
Metacognition frameworks
Teaching teams to think about their own thinking in an AI-augmented environment. When to trust the AI output, when to push back, how to maintain critical judgement.
Measurement & evaluation
Most L&D metrics measure consumption, not capability. I build evaluation frameworks that tell you whether people are actually getting better, not just completing modules.
AI policy for learning
Governance frameworks for how AI should and shouldn't be used in learning contexts. Where it helps, where it hinders, and where the line is.
Who this is for
Is this right for you?
Who I've worked with
Brands and agencies I've done this for
The framework: Automate, Augment, Protect
This is the core of how I think about AI in learning. Automate the procedural knowledge AI can handle reliably. Augment the complex work where AI makes humans better without replacing the human contribution. Protect the capability-building where the cognitive effort IS the learning — critical analysis, creative problem-solving, ethical reasoning.
Common questions
Things people ask me
Is this just AI training?
No. AI training teaches people to use tools. This is about redesigning how organisations learn and build capability in an AI-augmented world. Tools are part of it, but not the point.
How long does a typical engagement take?
Framework design typically takes 6-8 weeks. Implementation support varies with organisation size. I also run intensive workshops for leadership teams (1-2 days) to align on strategic approach.
Do you deliver the training yourself?
I design the framework and build programme architecture. For delivery at scale, I work with internal L&D teams or recommended partners. For leadership workshops and board sessions, I deliver directly.
What makes this different from other AI L&D consultants?
Most AI L&D focuses on tool adoption. This focuses on capability architecture — keeping people sharp, building genuine capability, not tool dependency. The Automate/Augment/Protect framework gives decision-making structure, not just a training plan.
See it in action
Related case studies
Personalised L&D content generation
Designed a real-time L&D content framework for Mondelez that generates personalised, always-current rich media education content — delivered across any medium and tailored to individual learner preferences through digital avatars.
Read case study →
Gamified L&D with AI character generation
Built a gamified open-world learning framework for LEGO using custom AI datasets and living LoRAs of their main characters — operated as a full AI production pipeline by a single person.
Read case study →
Related thinking
From the journal
Designing for thinking: a framework for AI-augmented learning that doesn't make us worse
15 February 2026
The rush to summarise: what enterprise L&D loses when AI reads for us
8 February 2026
What should live in your head when AI remembers everything?
1 February 2026
Want to talk about AI L&D capability mapping?
Most of my work starts with a conversation. No pitch deck, no hard sell — just an honest look at where this can make a difference for you.