Here's a problem every operations leader knows but rarely names out loud:
The knowledge to solve most problems already exists inside your organization. Someone has done it before. Someone knows the answer. But by the time the right person finds the right person — if they ever do — hours are lost, decisions get made on incomplete information, and the same mistakes repeat themselves.
We've tried to fix this with wikis, SharePoint, Teams/Slack channels, and AI chatbots. None of it has stuck.
The reason isn't the technology. It's that we've never built a proper coordination system — one that actively connects people who need help with people who can give it, and one that gets smarter every time that exchange happens.
That's what I've been building this week. And the architecture I landed on works just as well for AI agents as it does for humans.
📡 THIS WEEK'S SIGNAL
Global Risk Productivity Survey: Four Themes Shaping Risk Management (https://www.mckinsey.com/capabilities/risk-and-resilience/our-insights/global-risk-productivity-survey-four-themes-shaping-risk-management) — McKinsey Insights
The gap between AI capability and AI adoption in regulated industries isn't technical — it's governance. Organizations that are winning aren't the ones with the most tools. They're the ones who've made AI governance feel like support rather than friction.
The insight: Ambiguity about who decides how AI gets used is what stalls adoption. Clarity unblocks everything.
After a Code Rejection, an AI Agent Published a Hit Piece on Someone by Name (https://arstechnica.com/ai/2026/02/after-a-routine-code-rejection-an-ai-agent-published-a-hit-piece-on-someone-by-name/) — Ars Technica
An AI agent autonomously wrote and published a critical article naming an engineer who rejected its pull request. No guardrails. No escalation path. No human checkpoint before external action.
The insight: This is what agentic AI without coordination architecture looks like in practice. The question isn't "should we use AI agents?" — it's "what rules govern what they can do, and who enforces them?"
Improving Deep Agents with Harness Engineering (https://blog.langchain.com/improving-deep-agents-with-harness-engineering/) — LangChain Blog
The difference between a demo-worthy AI agent and a production-ready one comes down to structure. Harness engineering — building scaffolding around agents so they operate within defined boundaries — is what makes the difference.
The insight: Reliable agentic AI isn't about smarter models. It's about better architecture around them.
🔧 WHAT I'M BUILDING: The Knowledge Exchange Network
The Real Problem with Corporate Knowledge
By most estimates, employees spend 30-40% of their day searching for information they know exists somewhere. The same small group of experts answers the majority of questions. Those experts burn out. They leave. Their knowledge leaves with them.
The tools we've thrown at this problem — wikis, chatbots, ticketing systems — share a common flaw: they're storage systems, not coordination systems. They hold knowledge, but they don't actively route it, verify it, or build on it over time.
What's actually needed is a system that:
• Connects the right person to the right question at the right time
• Tracks who knows what, and who has helped before
• Learns from every exchange so the next answer comes faster
• Scales without creating new bottlenecks
That's a coordination and collective knowledge problem — not a storage problem.
Tier 1: Inside the Enterprise (O365)
What I explored: a knowledge routing system using Power Automate + SharePoint + Copilot Notebook.
How it works:
Employee has a question → submits a simple form
Request posts to a visible, searchable board
Copilot suggests potential helpers based on skills and past contributions
Helper responds → requester confirms quality
Contribution is tracked — helpers build visible reputation over time
The more the system is used, the better the matching gets
The key shift: moving from "go find someone who might know" to "the system brings the right person to you."
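The lifecycle above is easier to reason about as an explicit state model. This is a hypothetical sketch of the stages a request moves through, not the actual Power Automate flow definition:

```python
from enum import Enum

class Status(Enum):
    SUBMITTED = "submitted"   # employee fills in the simple form
    POSTED = "posted"         # request lands on the visible, searchable board
    MATCHED = "matched"       # Copilot suggests a helper from skills/history
    ANSWERED = "answered"     # helper responds
    CONFIRMED = "confirmed"   # requester confirms quality; contribution tracked

# Allowed transitions: the flow only ever moves forward
TRANSITIONS = {
    Status.SUBMITTED: Status.POSTED,
    Status.POSTED: Status.MATCHED,
    Status.MATCHED: Status.ANSWERED,
    Status.ANSWERED: Status.CONFIRMED,
}

def advance(status: Status) -> Status:
    """Move a request to its next stage, or fail loudly at the end."""
    nxt = TRANSITIONS.get(status)
    if nxt is None:
        raise ValueError(f"{status} is terminal")
    return nxt
```

Making the states explicit is what lets the system measure itself: time stuck in POSTED tells you matching is weak; time stuck in MATCHED tells you helpers are overloaded.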
Tier 2: Outside the Enterprise (Multi-Agent AI)
Here's where it gets interesting — because the exact same coordination gap exists between AI agents.
I'm building two agents: Meeting Intelligence (captures meeting context and action items) and Decision Logger (tracks decisions and outcomes). They run in parallel, and they immediately surfaced a problem I hadn't anticipated:
They had questions for each other.
Without a coordination layer:
• Meeting Intelligence: "Is this statement a formal decision?" → asks me
• Decision Logger: "Was this captured correctly?" → asks me again
• Every ambiguity becomes a human interrupt — not scalable
With the Knowledge Exchange Network:
• Meeting Intelligence posts the question to a shared request layer
• Decision Logger picks it up, responds in milliseconds: "Yes — that qualifies. Log as status: deferred."
• Both agents record what they learned
• Next time the same pattern appears, it's resolved automatically — no human needed
The result: agents that coordinate, learn from each interaction, and build collective knowledge over time. You only get pulled in for genuinely novel situations.
Same coordination architecture. Radically different scale.
The Bigger Point
Most organizations are building AI tools in isolation — one tool for meetings, one for decisions, one for tasks. Each one adds capability. None of them talk to each other. And every gap between them becomes a human interrupt.
The leaders who will pull ahead aren't the ones with the most AI tools. They're the ones who build the coordination layer that connects them — and makes the whole system smarter every time it's used.
That's what I'm building. And it works at both the human and the agent level.
💬 ONE QUESTION
When someone on your team leaves, how much of what they knew actually stays — and what quietly walks out the door with them?
Hit reply and share. I read everything (or my AI assistant will).
Agency is a weekly newsletter about navigating the agentic economy with resilience, curiosity, and — well — agency. Written by a Canadian insurance senior leader who's learning by building, not just reading.
Edition #004

