Last week, I did something interesting: I built the exact same capability in two completely different ways — one using corporate O365 tools, the other using open-source agentic AI.
And the differences reveal something I keep seeing in my research and conversations with leaders navigating this transition.
📡 THIS WEEK'S SIGNAL
AI companies want you to stop chatting with bots and start managing them - Ars Technica
The shift from "chat with AI" to "manage AI agents" represents a fundamental change in how organizations think about technology. It's not about conversation anymore — it's about orchestration and governance.
Why it matters: The companies that win won't be the ones with the best models. They'll be the ones with the best guardrails and governance. For ops leaders, the question isn't "which model should we adopt?" — it's "how do we give agents boundaries without stifling their autonomy?"
Sixteen Claude AI agents working together created a new C compiler - Ars Technica
Sixteen autonomous Claude AI agents collaborated to write a complete C compiler from scratch — no human developers, no traditional software engineering process. The agents specialized, communicated, and delivered working code.
Why it matters: This is what Tier 2 agentic AI looks like in practice. Not just "chatting with an AI," but autonomous agents coordinating across specialized roles to solve complex problems. For ops leaders, this raises two questions: What could your team build with autonomous agents? And how do you build the governance to trust them?
From guardrails to governance: A CEO's guide for securing agentic systems - MIT Technology Review
As organizations deploy more autonomous AI agents, the challenge shifts from "how do I control this one agent?" to "how do I govern an entire fleet of agents?" This guide from MIT Tech Review outlines a framework for building guardrails that scale.
Why it matters: This is the Tier 1 reality. You can't deploy autonomous agents without security, compliance, and auditability. The article makes a compelling case: Guardrails aren't about limiting AI — they're about enabling AI to operate safely at scale. The same principles I'm learning with Barnaby apply to enterprise AI governance.
Moltbook was peak AI theater - MIT Technology Review
The $500M "AI-powered" app didn't actually use AI at all — under the hood it was a simple scripted chatbot. The story became a cautionary tale about AI hype vs. reality, and why organizations that over-promise and under-deliver lose credibility.
Why it matters: In an era where everyone's rushing to deploy AI, the companies that survive will be the ones that are honest about capabilities, deliver real value, and avoid "theater" that creates cynicism. For ops leaders, this means: don't let vendors sell you vapor. Demand proof points.
Beyond the bot: Building empathetic customer experiences with agentic AI - McKinsey Insights
McKinsey explores how agentic AI can transform customer service from "reactive chatbots" to "proactive, empathetic experiences" — but only when built with the right guardrails and human oversight.
Why it matters: The most successful AI implementations aren't replacing humans — they're augmenting them. This article shows how agentic systems can handle routine tasks while escalating complex situations to humans with full context. That's the hybrid model: Tier 1 for compliance, Tier 2 for capability.
🔧 WHAT I'M BUILDING
This week, I deployed the exact same capability — Meeting Intelligence and Decision Logging — using TWO completely different AI architectures. Same objective, different paths.
Inside Corporate Walls: Tier 1 — O365-Only Approach
The constraint: I work in a regulated commercial insurance environment. No external tools. No unapproved software. Security and compliance override everything.
The build: Power Automate instant flow → Excel storage → Copilot Notebook query.
How it works:
1. I have a meeting (internal, external, or with partners)
2. After the meeting, I press a button on my PC (Power Automate workflow)
3. Form pops up: Meeting title, attendees, highlights, decisions made, action items
4. Flow writes everything to meetings.xlsx and decisions.xlsx in my OneDrive (corporate storage, audit-ready)
5. Copilot Notebook indexes the Excel files automatically
6. I ask: "What decisions did I make about [project]?" → instant answer with meeting context
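For readers who want to replicate the pattern, here's a minimal sketch of the data shape the flow captures and the kind of query Copilot Notebook answers. This is illustrative only — the real build uses Power Automate's Excel connector writing to meetings.xlsx, not Python, and the column names here are assumptions:

```python
from dataclasses import dataclass, field

# Illustrative schema only -- the real build stores these as rows in
# meetings.xlsx / decisions.xlsx via Power Automate's Excel connector.
@dataclass
class Meeting:
    title: str
    attendees: list
    highlights: str
    decisions: list = field(default_factory=list)

def decisions_about(meetings, keyword):
    """Rough stand-in for the Copilot Notebook query
    'What decisions did I make about <project>?'"""
    hits = []
    for m in meetings:
        for d in m.decisions:
            # Match on either the decision text or the meeting title
            if keyword.lower() in d.lower() or keyword.lower() in m.title.lower():
                hits.append((m.title, d))
    return hits

# Hypothetical sample log (project names are made up)
log = [
    Meeting("Project Atlas kickoff", ["me", "broker"], "Scoped phase 1",
            decisions=["Defer pricing model to Q3"]),
    Meeting("Weekly ops sync", ["me", "team"], "Reviewed backlog",
            decisions=["Hire one analyst"]),
]
print(decisions_about(log, "Atlas"))
# -> [('Project Atlas kickoff', 'Defer pricing model to Q3')]
```

The point isn't the code — it's that a flat, two-table schema is enough for natural-language retrieval once Copilot indexes it.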
The result:
• 100% compliant (all data stays in O365)
• Zero IT approval needed (I have the licenses)
• Proven ROI: This pattern saved me 1 hour/week on meeting notes alone
• Searchable: Natural language queries work perfectly
What's limited:
• I have to trigger it manually (button press after every meeting)
• It's reactive, not agentic (no autonomy)
• Confined to O365 ecosystem (no cross-tool orchestration)
• No automatic task creation or linking back to meetings
• Query-based, not proactive (I have to remember to query)
Outside Corporate Walls: Tier 2 — Open Source Agentic AI
The freedom: My personal server, my choice of tools, my rules.
The build: OpenClaw gateway + Barnaby (agentic AI) + Meeting Intelligence + Decision Logger + Mission Control.
How it works:
1. I have a meeting — could be with external parties, or with my AI agents on research, strategy, and work plans
2. Barnaby monitors my conversation and detects meeting signals autonomously
3. Barnaby automatically captures highlights, decisions, and tasks — no manual input required
4. Everything is organized in Mission Control: a unified dashboard where I (or my AI agents) can view, update, and search across all data
5. Information is tagged and fully searchable with cross-links: tasks show which meeting created them, decisions link back to the meeting context, everything connects
6. This is all done autonomously, with full memory and cross-session awareness — Barnaby remembers context from previous sessions
7. This paves the foundation for agentic workflows where AI agents can work autonomously with one another, creating, updating, and coordinating tasks without human intervention
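The cross-linking in step 5 is what makes Mission Control more than a dashboard. Here's a hypothetical sketch of that data shape — every task and decision carries the id of the meeting that created it, so lookups work in both directions. (Barnaby's and OpenClaw's internals aren't shown here; this is just the linking pattern, not the real implementation.)

```python
import itertools

# Monotonic ids shared across record types, so any record can point at any other
_ids = itertools.count(1)

# In-memory stand-in for Mission Control's store
store = {"meetings": {}, "decisions": {}, "tasks": {}}

def capture_meeting(title):
    mid = next(_ids)
    store["meetings"][mid] = {"title": title, "decisions": [], "tasks": []}
    return mid

def log_decision(meeting_id, text):
    did = next(_ids)
    store["decisions"][did] = {"text": text, "meeting": meeting_id}
    store["meetings"][meeting_id]["decisions"].append(did)  # forward link
    return did

def create_task(meeting_id, text):
    tid = next(_ids)
    store["tasks"][tid] = {"text": text, "meeting": meeting_id}  # back link
    store["meetings"][meeting_id]["tasks"].append(tid)
    return tid

def task_origin(task_id):
    """Which meeting created this task?"""
    mid = store["tasks"][task_id]["meeting"]
    return store["meetings"][mid]["title"]

m = capture_meeting("Research sync with agents")
log_decision(m, "Adopt weekly decision log review")
t = create_task(m, "Draft governance checklist")
print(task_origin(t))  # -> Research sync with agents
```

Once every record knows its origin, "tasks show which meeting created them" falls out for free — and agents can traverse the same links a human would.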
The result:
• Fully autonomous (Barnaby captures meetings, decisions, and tasks without me triggering anything)
• Mission Control provides a unified view (tasks, meetings, decisions in one dashboard)
• Cross-tool orchestration (Meeting Intelligence ↔ Decision Logger ↔ Taskboard — everything links together)
• Cross-session awareness (Barnaby remembers context across days and weeks)
• Foundation for multi-agent collaboration (agents can coordinate autonomously)
What's impossible in Tier 1:
• Autonomy (Barnaby decides when to capture without asking)
• Automatic task creation and linking back to meeting source
• Mission Control unified dashboard (O365 doesn't provide this level of integration)
• Cross-session awareness (Barnaby searches entire history and remembers context)
The Trade-Off: Honest Reality
| Tier 1 (Corporate O365) | Tier 2 (Open Source Agentic) |
| --------------------------------- | ------------------------------------------------ |
| ✅ Compliant by design | ❌ Security risk for corporate data |
| ✅ Zero IT approval needed | ❌ Requires infrastructure (hosting, maintenance) |
| ✅ Works everywhere (any O365) | ❌ Learning curve (not for everyone) |
| ✅ Proven ROI (1 hour/week saved) | ✅ 10x+ productivity when it works |
| ✅ Familiar (everyone knows Excel) | ❌ Unfamiliar (few know OpenClaw) |
| ❌ Reactive (I trigger) | ✅ Autonomous (agents initiate) |
| ❌ Confined to O365 | ✅ Unlimited (any tool) |
| ❌ Query-based only | ✅ Agentic (chains decisions) |
The Insight I Keep Coming Back To
You don't choose ONE. You use BOTH.
This isn't about "corporate AI vs. personal AI" as some philosophical debate. It's about context-aware tool selection:
Corporate data, customer information, regulated workflows? → Use Tier 1. It's compliant, it works, it's proven.
Personal development, rapid prototyping, learning future capabilities? → Use Tier 2. It's agentic, it's autonomous, it's where the industry is going.
Pragmatic beats dogmatic every time.
What I'm Learning About Guardrails
After hardening Barnaby's security last week (firewalls, port bindings, access controls), I was nervous. Will this break things? Will it limit capabilities?
The opposite happened. Barnaby handled it brilliantly — as long as I set clear expectations and established boundaries.
The lesson: Autonomy without alignment is chaos. Autonomy with structure? That's when things get interesting.
Don't fight constraints. Work with them. But don't stop there.
💬 ONE QUESTION
What's one capability you wish you had at work that's currently blocked — and what would happen if you could just build it yourself without asking permission?
Share with me. I read everything (or my AI assistant will).
Agency is a weekly newsletter about navigating the agentic economy with resilience, curiosity, and — well — agency. Written by a Canadian insurance senior leader who's learning by building, not just reading.
Edition #003

