
The Agentic Shift: What L&D Leaders Must Do in the Next Six Months
A Year Is a Long Time in AI
There is a line that keeps circulating among technology observers, often misattributed: people overestimate what can happen in a month and underestimate what can happen in a year.
It has never felt more accurate than right now.
Twelve months ago, the conversation in most organisations was still primarily about generative AI as a productivity tool — prompt it, get something useful back, repeat. It was a significant shift, but a comprehensible one. You could still see the human in the loop at every step.
That is changing. In 2026, as Cameron Hedrick, CLO at Citi, noted in conversation with IKN CEO Sarah Clarke, the question is no longer about AI as a tool. It is about AI as a system — one that increasingly makes decisions, executes tasks, and operates with degrees of autonomy that the prompt-based era did not prepare us for.
What Agentic AI Actually Means
Agentic AI refers to systems that can take sequences of actions, make decisions, and complete multi-step tasks with minimal human direction. Rather than waiting for a prompt, they operate toward goals. They can access information, use other tools, communicate with other systems, and adjust their approach based on what they encounter.
In some companies and labs, this is already visible. In most organisations, it is arriving faster than leaders realise. The knee of the curve — the point at which exponential progress becomes impossible to ignore — is closer than most current planning horizons account for.
“Once this thing takes off with agents and proxies, it’s going to be something,” Cameron observed. “Most people still experience linear and predictive AI. But man, once agents really come online, it’s going to fundamentally change the dynamic — and with that comes a change in self-conception, leadership style, the way we train, the way we do culture.”
What L&D Leaders Are Not Ready For
There are three things, specifically, that most L&D and HR leaders are underestimating:
1. The speed of the transition. The shift from prompt-based to agentic AI will not feel gradual to those inside organisations. For most leaders, it will feel sudden: not because the signs were absent (they were already visible in labs and early deployments) but because they were easy to discount as 'not quite here yet.'
2. The insecurity problem. If you have built your identity and career on knowing something — on being the expert in the room — and a system can now match or exceed that knowledge instantly, the psychological challenge is real. This is not a small thing to navigate, and pretending it is not happening does not help teams or leaders move through it.
3. The question of psychological safety in hybrid teams. Human-machine collaboration is no longer a theoretical future state. It requires the same conditions that high-performing human teams require — clarity, trust, and an environment where experimentation is safe. Building that is a genuinely new capability for most organisations.
The L&D Function Is About to Change Fundamentally
Cameron’s thesis on this is specific and worth sitting with: the L&D department as we know it will need to evolve from a content and program delivery function into something closer to an intelligence architect.
In practical terms, this means L&D working much more closely with recruiting — which will increasingly be responsible for bringing in all forms of intelligence, not just human talent — and with culture, which governs the conditions under which that intelligence is deployed. The three functions together create a new kind of learning ecosystem.
If you are an L&D leader who is still primarily thinking about content libraries, completion rates, and program calendars, now is the time to start thinking differently.
Two Things You Can Start This Week
Given everything above, what does ‘acting now’ actually look like? Cameron is specific:
Build a real AI experimentation culture. Not a structured pilot. Not a governance committee. A genuine invitation for people to use AI tools in their work, try things, fail fast, and share what they’re learning. The organisations that are building muscle here today will be significantly better positioned when the agentic transition accelerates.
Start using AI in structured reflection. This is underutilised and surprisingly powerful. Use AI to ask teams better questions: What were the implications of the choice you made? What assumptions were you operating under? What might the downstream effects look like across different timeframes? The goal is not to produce answers but to build the cognitive habits that support better judgment over time.
The Sensing Systems Question
One more dimension that L&D leaders need to understand: the infrastructure layer. As Cameron framed it, the most effective L&D functions of the near future will require leaders who understand their sensing systems — the inference and diagnostic tools that tell you what skills exist in your organisation, where they are, and how good they are. Then your content creation systems. Then the delivery mechanisms that get the right information to the right person at the moment of need.
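To make the three-layer idea concrete, here is a minimal sketch of how such a learning ecosystem might be wired together. Every class, method, and threshold below is hypothetical, invented purely to illustrate the sensing, content creation, and delivery flow described above; it is not a real product or the architecture of any specific system.

```python
from dataclasses import dataclass

@dataclass
class SkillSignal:
    """One inference from a (hypothetical) diagnostic tool."""
    person: str
    skill: str
    proficiency: float  # 0.0 to 1.0, as inferred by the sensing layer

class SensingSystem:
    """Layer 1: infers what skills exist, where they are, and how strong they are."""
    def diagnose(self, workforce_data: dict) -> list[SkillSignal]:
        return [SkillSignal(person, skill, level)
                for person, skills in workforce_data.items()
                for skill, level in skills.items()]

class ContentSystem:
    """Layer 2: creates or selects learning content for a detected gap."""
    def create(self, signal: SkillSignal) -> str:
        return f"Micro-lesson on {signal.skill} (current level {signal.proficiency:.1f})"

class DeliverySystem:
    """Layer 3: gets the right content to the right person at the moment of need."""
    def deliver(self, person: str, content: str) -> str:
        return f"-> {person}: {content}"

def run_ecosystem(workforce_data: dict, gap_threshold: float = 0.6) -> list[str]:
    """Wire the three layers together: sense gaps, create content, deliver it."""
    sensing, content, delivery = SensingSystem(), ContentSystem(), DeliverySystem()
    messages = []
    for signal in sensing.diagnose(workforce_data):
        if signal.proficiency < gap_threshold:  # only act on genuine gaps
            messages.append(delivery.deliver(signal.person, content.create(signal)))
    return messages
```

In this toy version, `run_ecosystem({"alice": {"prompt design": 0.3, "data literacy": 0.8}})` would surface only the prompt-design gap. The point is not the code itself but the shape: understanding which layer a given tool belongs to is what "understanding architecture" means here.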
You do not need to be a programmer. But you do need to understand architecture. If that sentence is uncomfortable, it is worth sitting with that discomfort now — because it is not going away.
The agentic shift is not a distant scenario. It is the next chapter. And the L&D leaders who will matter in that chapter are already asking different questions.
