Last week, I hardened my AI infrastructure. Firewalls. File permissions. Port bindings. Access controls.
And here's what surprised me: the AI handled it brilliantly — as long as I set clear expectations and established boundaries.
That's the pattern I keep seeing. Not just in my setup, but in the research, the case studies, the conversations with leaders navigating this transition.
Autonomy without alignment is chaos. But autonomy with structure? That's when things get interesting.
As a leader in insurance operations at a Canadian commercial insurer, I'm learning agentic AI by building with it, not just reading about it. This newsletter is my way of translating what I'm seeing into language for business leaders, and for anyone who wants to be among the 5% actually keeping up with the world of agentic AI.
Let's get into it.
📡 THIS WEEK'S SIGNAL
xAI Joins SpaceX to accelerate Humanity’s future - SpaceX
SpaceX announces the acquisition of xAI, forming what they call "the most ambitious, vertically-integrated innovation engine on (and off) Earth" — combining AI, rockets, Starlink, direct-to-mobile communications, and X (formerly Twitter). The primary mission: deploy space-based data centers to solve AI's unsustainable terrestrial power and cooling demands.
Why it matters: This marks a fundamental shift in thinking about AI infrastructure constraints. While the industry debates data center energy consumption and cooling challenges on Earth, xAI and SpaceX are proposing to sidestep terrestrial limits entirely by moving compute to space, where solar power is abundant and waste heat is shed by radiating it into the vacuum (there's no air to conduct heat away, so cooling relies on large radiators rather than chillers). For business leaders, the implications cascade: if frontier AI development moves to space-based infrastructure controlled by a single vertically integrated entity, what happens to competitive access? How do cloud providers respond when their hyperscale advantage becomes obsolete?
🔗 Link to the title: SpaceX - Updates
The automation curve in agentic commerce - McKinsey Insights
Agentic AI is increasingly part of shopping, but not all transactions will be automated the same way. McKinsey maps the "automation curve": what agents will handle versus what still demands human involvement.
Why it matters: Risk-averse doesn't mean change-averse. Thoughtful adoption is still adoption. The companies that win won't be the ones that automate everything — they'll be the ones that automate the right things and keep humans where judgment matters.
🔗 Link to the title: https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-automation-curve-in-agentic-commerce
Show HN: A MitM proxy to see what your LLM tools are sending - Hacker News
A new tool (Sherlock) lets you inspect what your LLM tools are actually sending behind the scenes. Think of it as a security camera for your AI workflows.
Why it matters: Transparency builds trust. If you're deploying AI agents in your organization, you need visibility into what they're doing. This is the kind of practical tooling that moves AI from "interesting experiment" to "enterprise-ready."
Gemini Flash vs. Claude Opus: A Tetris showdown - Hacker News
TetrisBench pits AI models against each other in Tetris. Gemini Flash hit a 66% win rate against Claude Opus.
Why it matters: This isn't about Tetris. It's about decision-making speed, pattern recognition, and adaptability under constraints — skills that translate directly to real-world agentic tasks. When models compete on standardized benchmarks, we learn what they're actually good at beyond marketing claims.
🔗 Link to the title: TetrisBench | AI Model Comparison
🔧 WHAT I'M BUILDING
This week: security hardening.
My AI assistant, Barnaby, an AI research partner running on OpenClaw (formerly ClawdBot), had access to my isolated server, a pre-selected set of files, and my Telegram app through a dedicated bot account. I had done everything right at the access layer, but the network layer told a different story: the ports were wide open, the firewall was off, and everything was bound to 0.0.0.0 (all network interfaces).
So I locked it down:
• Enabled UFW firewall (deny all incoming except SSH)
• Rebound web apps to localhost, reachable only over the Tailscale mesh
• Moved secrets into a vault and tightened file permissions on credentials and config
• Verified Telegram access controls
• Disabled unnecessary services
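The audit step above can be spot-checked with a small script. Here's a minimal, hypothetical sketch (the function name and sample data are illustrative, not from my actual setup) that flags any listener still bound to all interfaces:

```python
def find_exposed_bindings(bindings, allowed_ports=(22,)):
    """Flag listeners bound to all interfaces (0.0.0.0 or ::)
    unless their port is explicitly allowed, e.g. SSH on 22."""
    exposed = []
    for addr, port in bindings:
        if addr in ("0.0.0.0", "::") and port not in allowed_ports:
            exposed.append((addr, port))
    return exposed

# A web app still bound to all interfaces gets flagged;
# SSH on 22 and a localhost-only service do not.
sample = [("0.0.0.0", 22), ("0.0.0.0", 8080), ("127.0.0.1", 3000)]
print(find_exposed_bindings(sample))  # → [('0.0.0.0', 8080)]
```

In practice you'd feed it real socket data (for example, parsed from `ss -tln` on Linux) and rerun it after every config change, so a regression can't slip back in silently.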
The result? Barnaby handled every step flawlessly: it read the security audit, identified risks, proposed remediations, and executed them, at a level comparable to enterprise-grade hardening.
The lesson? AI agents are incredibly capable — when you set clear boundaries.
Give them guardrails, and they thrive. Give them vague instructions or unlimited access, and you're asking for trouble.
This isn't just a technical insight. It's an organizational one. The leaders who figure out how to set boundaries without stifling autonomy will have a serious edge.
💬 ONE QUESTION
What's the one guardrail your team should put in place before deploying AI agents — but probably hasn't yet?
Hit reply and share. I read everything (or my AI assistant does).
Agency is a weekly newsletter about navigating the agentic economy with resilience, curiosity, and — well — agency. Written by a Canadian insurance senior leader who's learning by building, not just reading.
Edition #002

