Modern AI is a master of the confidence trick.
It can sound authoritative. It can simulate reasoning. It can produce answers that appear thoughtful, contextual, even strategic. But as Professor Michael Wooldridge argued in his 2026 Royal Society Michael Faraday Prize Lecture—“This Is Not the AI We Were Promised”—appearance is not understanding.
Today’s AI systems can perform impressively while lacking any true comprehension of what they are doing. They mimic reasoning without possessing it. And that distinction between fluency and understanding is not merely philosophical. It is legal, operational, and consequential.
The Royal Society’s Faraday Prize Lectures have long invited us to reflect on scientific breakthroughs and their societal impact. Michael Faraday himself used his famous Christmas Lectures to explain complex principles—like electricity—through accessible demonstrations. He illuminated forces that were invisible but transformative.
We are at a similar inflection point with AI.
Do we truly understand what we are deploying? Or are we dazzled by the light it emits—without grappling with the deeper mechanics driving it?
Generative and agentic AI systems do more than retrieve information. They interpret prompts, synthesize language, and influence decisions. Language carries meaning. Meaning drives action. And action carries consequences. When outputs are probabilistic rather than deterministic, the gap between comprehension and consequence widens.
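The difference is easy to see in miniature. The toy sketch below (the vocabulary and probabilities are invented for illustration; a real model derives them from billions of parameters) shows why the same prompt, asked twice, need not produce the same answer: generative systems sample from a distribution rather than execute a fixed rule.

```python
import random

# Toy next-token distribution for a single, fixed prompt.
# These tokens and weights are hypothetical, chosen only to illustrate sampling.
NEXT_TOKEN_PROBS = {
    "approve": 0.45,
    "escalate": 0.35,
    "reject": 0.20,
}

def sample_output(probs, rng):
    """Deterministic input, probabilistic output: draw one token by weight."""
    tokens = list(probs)
    weights = [probs[t] for t in tokens]
    return rng.choices(tokens, weights=weights, k=1)[0]

# The same prompt, asked five times, can yield different answers each run.
rng = random.Random()
answers = [sample_output(NEXT_TOKEN_PROBS, rng) for _ in range(5)]
print(answers)
```

Traditional software would return the same answer every time; here, the variation is the point, and it is also where governance must begin.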
That gap is not merely a technical nuance. It is a governance challenge.
As AI systems become more autonomous, we are forced to confront a foundational question: When the machine makes the call, who holds the liability?
The End of Predictability
For decades, we treated software like a predictable machine. If it broke, there was a bug in the code or a flaw in the logic. You could trace the error to a human decision.
Modern AI has broken that model.
These systems don’t just follow instructions; they interpret intent. They respond to new information and produce results that are often as surprising to their creators as they are to their users. AI isn’t uncontrollable, but it is no longer a simple tool. It is an agent.
Organizations are rightly racing to capture the speed and intelligence AI offers. But to do so safely, we must move past the “set it and forget it” mentality. We need a Digital Leash.
Applying the Digital Leash
A Digital Leash isn’t a “kill switch” designed to stifle innovation. It is an accountability framework designed to protect it. At its core, the leash consists of three non-negotiable boundaries:
1. Strict Scoping
Agentic systems should never have unlimited freedom. We must define the sandbox. Deciding where AI is allowed to operate—and where it is strictly forbidden—is the first step of responsible leadership.
2. Hard Guardrails
Autonomy is not the same as discretion. We must implement technical controls that prevent actions outside of approved parameters, bolstered by consistent human oversight. The machine may suggest the path, but a human must own the destination.
3. Radical Visibility
If you can’t see what the AI is doing, you can’t govern it. Organizations must be transparent with customers and stakeholders about where AI is used and how decisions are reached. Without transparency, there is no accountability.
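The three boundaries above can be sketched in a few lines of code. This is a minimal illustration, not a real product API: the action names, the `REFUND_LIMIT` threshold, and the `LeashedAgent` class are all hypothetical, standing in for whatever scoping rules, approval gates, and audit infrastructure an organization actually deploys.

```python
from dataclasses import dataclass, field

# 1. Strict scoping: the sandbox of actions the agent may even attempt.
ALLOWED_ACTIONS = {"draft_reply", "summarize", "issue_refund"}

# 2. Hard guardrail: above this (hypothetical) threshold, a human must sign off.
REFUND_LIMIT = 100.00

@dataclass
class LeashedAgent:
    # 3. Radical visibility: every decision is recorded, executed or not.
    audit_log: list = field(default_factory=list)

    def act(self, action: str, amount: float = 0.0,
            human_approved: bool = False) -> str:
        # Scoping: refuse anything outside the sandbox outright.
        if action not in ALLOWED_ACTIONS:
            self.audit_log.append((action, "blocked: out of scope"))
            return "blocked"
        # Guardrail: high-impact actions wait for explicit human approval.
        if action == "issue_refund" and amount > REFUND_LIMIT and not human_approved:
            self.audit_log.append((action, f"held: ${amount:.2f} needs approval"))
            return "held_for_human"
        # Within scope and within bounds: execute, and log it.
        self.audit_log.append((action, "executed"))
        return "executed"

agent = LeashedAgent()
print(agent.act("delete_database"))                                  # blocked
print(agent.act("issue_refund", amount=500.0))                       # held_for_human
print(agent.act("issue_refund", amount=500.0, human_approved=True))  # executed
```

Note what the leash does not do: it never prevents the agent from suggesting the refund. It only ensures the machine suggests the path while a human owns the destination, and that the audit log shows exactly what was attempted and why.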
As I have said before:
“We should trust AI — once we have the right processes for verification and governance. We operate in a dynamic learning environment, and our AI systems and agents need guardrails, not blind faith, if they are to earn the trust we extend to our people.”
Trust is not assumed. It is engineered.
The Foundation of Sustainable Innovation
At EDB, we believe that trust, openness, and sovereignty are the prerequisites for the AI era. We provide organizations with the visibility and control they need over their data and infrastructure.
Innovation thrives when control is built into the foundation, not bolted on as an afterthought. You wouldn’t release a powerful, unpredictable animal into a crowded park without a leash; we should treat autonomous AI with the same level of caution.
Professor Wooldridge’s lecture reminded us that there are limits to the trust we can place in AI itself. Clear governance does not restrict innovation; it protects it. If we fail to build the digital leash now, misplaced confidence will eventually produce real-world consequences. And when it does, responsibility will not belong to the machine. It will belong to us.