Most AI content makes it look easy: deploy an agent, automate a workflow, watch productivity happen. Here's what nobody talks about: doing all of that while leading a team, launching personal projects, and showing up for your family.

The breakthrough wasn't building more tools; it was discovering how to make AI work as a coordination layer that holds it all together.

Three months ago, I gave an AI access to a sandbox, named it Barnaby, and told it to help me build things. Since then, I've been experimenting with what it means to live with agentic AI as an everyday tool. Not just at work, but across every facet of life.

What I'm learning is that AI isn't replacing orchestration; it's amplifying the orchestrator. The skills that matter now aren't just technical: they're about systems thinking, pattern recognition, and setting clear boundaries. The people who thrive in this transition aren't the ones who master every new model. They're the ones who figure out how to make AI work for them, not just with them.

📡 THIS WEEK'S SIGNAL

McKinsey Global Risk Productivity Survey: Four Themes Shaping Risk Management (https://www.mckinsey.com/capabilities/risk-and-resilience/our-insights/global-risk-productivity-survey-four-themes-shaping-risk-management)
→ The gap between AI capability and AI adoption in regulated industries isn't technical — it's governance. Organizations that are winning aren't the ones with the most tools. They're the ones who've made AI governance feel like support rather than friction.

Ars Technica: After a Code Rejection, an AI Agent Published a Hit Piece on Someone by Name (https://arstechnica.com/ai/2026/02/after-a-routine-code-rejection-an-ai-agent-published-a-hit-piece-on-someone-by-name/)
→ This is what agentic AI without coordination architecture looks like in practice. No guardrails. No escalation path. No human checkpoint before external action. The question isn't "should we use AI agents?" — it's "what rules govern what they can do, and who enforces them?"
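To make the pattern concrete: a "human checkpoint before external action" can be as simple as a gate that every externally visible agent action must pass through. This is my own toy sketch of the idea, not code from any framework mentioned here; the class and action names are illustrative.

```python
# Toy sketch: externally visible agent actions require human sign-off.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Action:
    kind: str        # e.g. "publish_post", "send_email", "write_draft"
    payload: str

class ApprovalGate:
    """Blocks external actions until an approver signs off; logs everything."""
    EXTERNAL = {"publish_post", "send_email"}

    def __init__(self, approve: Callable[[Action], bool]):
        self.approve = approve
        self.audit_log: list[tuple[str, bool]] = []

    def execute(self, action: Action) -> bool:
        # Internal actions pass; external ones need explicit approval.
        allowed = action.kind not in self.EXTERNAL or self.approve(action)
        self.audit_log.append((action.kind, allowed))
        return allowed

# Usage: an approver that rejects everything external by default.
gate = ApprovalGate(approve=lambda a: False)
gate.execute(Action("write_draft", "..."))    # internal: allowed
gate.execute(Action("publish_post", "..."))   # external: blocked
```

The point isn't the code; it's that the escalation path and the audit trail exist *before* the agent can touch the outside world.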

LangChain: Improving Deep Agents with Harness Engineering (https://blog.langchain.com/improving-deep-agents-with-harness-engineering/)
→ The difference between a demo-worthy AI agent and a production-ready one comes down to structure. Harness engineering — building scaffolding around agents so they operate within defined boundaries — is what makes the difference.
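As a rough illustration of what that scaffolding can mean in practice (my own minimal sketch, not LangChain's implementation): a harness registers which tools an agent may call and enforces a call budget, so anything outside the defined boundary fails loudly instead of silently.

```python
# Toy sketch of a harness: only registered tools are callable,
# and every call counts against a fixed budget.
class ToolHarness:
    def __init__(self, max_calls: int = 10):
        self.tools = {}
        self.max_calls = max_calls
        self.calls = 0

    def register(self, name, fn):
        self.tools[name] = fn

    def call(self, name, *args):
        if name not in self.tools:
            raise PermissionError(f"tool '{name}' is outside the harness")
        if self.calls >= self.max_calls:
            raise RuntimeError("call budget exhausted")
        self.calls += 1
        return self.tools[name](*args)

# Usage: the agent can search, and nothing else.
harness = ToolHarness(max_calls=2)
harness.register("search", lambda q: f"results for {q}")
harness.call("search", "agent safety")  # allowed; unregistered tools raise
```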

HBR: Why Leaders Burnout Hits Different (https://hbr.org/2024/09/why-leaders-burnout-hits-different)
→ Captures the unique pressure on leaders to navigate change while keeping operations running smoothly. It's not just "learning AI" — it's "keeping the lights on" while everything shifts underneath.

Deep Work and the Good Life - Cal Newport (https://www.calnewport.com/blog/deep-work/)
→ Frameworks, systems, and routines that compound over time. The competitive advantage in an agentic world isn't your current knowledge — it's your learning velocity.

🔧 WHAT I'M BUILDING: A Shared Family Platform, Not Just Mine!

When I first started exploring agentic AI, I set up OpenClaw on a dedicated server. But as I've learned more about what it can do, I realized something: this shouldn't just be my tool. It should be a shared platform for our family.

So I built a dedicated instance for my wife, along with bots to support our busy lives: shared calendars, task coordination. The important part: all of our content stays in our hands. We host it. We control it. Not someone else's platform.

It's become our family coordination layer. Schedules, shopping lists, reminders, all organized by Barnaby and Lola. But more than that, it's a shared space where both of us can see what matters, what's coming, and how to support each other.
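Under the hood, a coordination layer like this can be as simple as a shared, structured store that both agents read and write. A toy sketch of that data model (the agent names Barnaby and Lola are from this newsletter; the structure itself is my illustration):

```python
# Toy sketch: one shared board that both agents (and both humans) use.
from collections import defaultdict
from datetime import date

class FamilyBoard:
    def __init__(self):
        self.tasks = defaultdict(list)   # owner -> list of open tasks
        self.reminders = []              # (date, text) pairs

    def add_task(self, owner, task):
        self.tasks[owner].append(task)

    def add_reminder(self, when, text):
        self.reminders.append((when, text))

    def upcoming(self, today):
        # Everything from today onward, soonest first.
        return sorted(r for r in self.reminders if r[0] >= today)

board = FamilyBoard()
board.add_task("barnaby", "order groceries")
board.add_reminder(date(2025, 6, 1), "daycare photo day")
board.upcoming(date(2025, 5, 1))  # both of us see the same view
```

The value isn't the data structure; it's that there's exactly one of it, and everyone (human or agent) reads from the same place.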

Learning Through AI — As a Parent

My daughter is two. She's learning colors, numbers, putting sentences together. I've been using AI to write targeted stories for her growth. Stories about sharing, about trying new things, about feelings.

We track her progress. She's using longer sentences now, 12 words or more. We celebrate it. These stories live in our internal audio library, where she can listen to them on her Yoto player (she loves that thing).

Barnaby helps organize it all — metadata, playlists, what she's listening to most. It's not a productivity tool. It's a "daddy time optimizer" tool. Same principles, completely different context.

Tier 1 & Tier 2 — Two Ways of Working

I've talked about Tier 1 (corporate tools, fully compliant) and Tier 2 (agentic AI, autonomous, powerful). Here's how that plays out in reality at work.

Last week, I prepared for and ran a leadership offsite for my team. Three months ago, this would have meant:

• Hours compiling reference material across documents
• Manual note-taking during sessions
• Hours afterward compiling summaries and sending follow-ups
• Zero way for participants to refer back to full sessions

This time, I used Copilot Notebook to capture everything in one place. During the offsite, I used the notebook interface to:

• Reference the agenda in real time
• Capture discussion notes live
• Get instant responses to questions that came up
• Provide a chat interface where participants could ask "what did we say about X?"
• Generate summaries automatically after each segment

I didn't spend time afterward on administrative work. It was all there, organized, ready to build on. When my boss asked whether this was another notebook to keep track of, the answer was no: treat it as another AI chat you can interact with. That's how easy and integrated it is to use.

Here's the insight: my work today, leveraging these tools and thinking this way, is completely different from how I worked three months ago. Tier 1 isn't restrictive; it's about learning what's available within corporate walls and using it to its full potential. The hardest part is getting others to try it, experiment with it, and come up with their own orchestration.

A Passion Project From MBA Days

Back in business school, I conceptualized a project to help individuals and small businesses understand and mitigate their cyber risk. That was a time when blockchain was the Wild West. The idea never went anywhere: no time, no bandwidth, and the timing wasn't right.

Now, I'm bringing it back to life. It's a personal project, helping me and those around me understand our cyber exposures in this new world. I'm building it with agentic AI because that's the most efficient way to deliver it.

This is all I'll say about it at this point: it's a passion project, built on my own time, that wasn't possible before. But it's also where my hands-on experience with agentic AI meets a real-life problem. That combination, understanding corporate realities while experimenting with what's possible, is becoming my professional differentiator.

ONE KEY INSIGHT

The coordination layer revelation:

"What I'm learning is that AI isn't replacing orchestration, it's amplifying the orchestrator. The skills that matter now aren't just technical, they're about systems thinking, pattern recognition, and setting clear boundaries. The people who thrive in this transition aren't the ones who master every new model. They're the ones who figure out how to make AI work for them, not just with them."

This changes how I think about "AI leadership." It's not about being the most technical person in the room. It's about being the person who can see systems, spot friction points, and articulate what needs to change, in language that connects technical capability to business outcomes.

The people who get this right aren't just building tools. They're building environments: for their teams, for their families, for their personal projects. And that's where the real advantage lies.

ONE QUESTION

Where are you feeling the tension between what's possible with AI and what's practical to actually implement? Is it time? Is it corporate constraints? Is it unclear where to start?

Share with me. I read everything, or my AI assistant will.

Agency is a weekly newsletter about navigating the agentic economy with resilience, curiosity, and — well — agency. Written by a Canadian insurance senior leader who's learning by building, not just reading.

Edition #005
