Real-world example
Anthropic ran a focused “Fin hackathon” to improve their AI Agent’s resolution rate. The team audited unresolved queries, identified underperforming topics, and created or updated content to close the gaps. They converted frequently used macros into AI-usable snippets, monitored Fin’s performance during live support, and continuously refined content based on real interactions. This structured approach enabled rapid improvement while maintaining quality standards.
Governance isn’t extra overhead or red tape. It’s what makes improvement routine and safe. When the path from insight to action is predictable, your AI Agent gets better every week and your support system keeps scaling with it.
3. Build a system that learns by default
AI performance isn’t static, but most teams treat their AI Agent like a one-time implementation. The most successful organizations design systems that learn: they analyze where the AI Agent struggles, then feed that insight directly into structured improvement.
That might look like:
- Reviewing common handoff points to humans.
- Tracking unresolved queries by topic or intent.
- Measuring resolution rate trends over time.
- Using these signals to prioritize fixes or content upgrades.
Whether you follow a formal loop (like the Fin Flywheel framework) or something simpler, the goal is the same: make improvement inevitable.
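To make that loop concrete, here’s a minimal Python sketch of what it might compute each week: rank topics by unresolved volume, then watch the resolution rate trend. The data shape and field names are illustrative assumptions, not Fin’s actual schema or API.

```python
from collections import Counter, defaultdict

# Hypothetical conversation records -- "topic", "resolved", and "week"
# are assumed fields for illustration, not a real Fin/Intercom schema.
conversations = [
    {"topic": "billing", "resolved": False, "week": "2024-W01"},
    {"topic": "billing", "resolved": True,  "week": "2024-W01"},
    {"topic": "sso",     "resolved": False, "week": "2024-W02"},
    {"topic": "billing", "resolved": False, "week": "2024-W02"},
]

# Rank topics by unresolved volume to prioritize fixes and content upgrades.
unresolved = Counter(c["topic"] for c in conversations if not c["resolved"])
print("Top content gaps:", unresolved.most_common(3))

# Track resolution rate per week to see whether changes are working.
by_week = defaultdict(lambda: [0, 0])  # week -> [resolved, total]
for c in conversations:
    by_week[c["week"]][1] += 1
    by_week[c["week"]][0] += c["resolved"]
for week, (resolved, total) in sorted(by_week.items()):
    print(f"{week}: {resolved / total:.0%} resolution rate")
```

However you source the data, the point is that these signals get reviewed on a cadence and turned into a prioritized backlog, rather than sitting in a dashboard.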
4. Treat content as competitive infrastructure
Your AI Agent is only as good as what it knows. This makes content strategy a competitive advantage, not just a support function.
You need to treat knowledge as infrastructure, where:
- Every topic has a clear owner.
- Content is structured, versioned, and ingestion-ready.
- New products ship with source-of-truth content by default.
- Changes are shipped on a schedule, not when someone finds time.
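As a sketch of what enforcing that contract could look like in practice, here’s a hypothetical audit script that flags articles with no owner or an overdue review. The fields and the 90-day threshold are assumptions for illustration, not a prescribed standard.

```python
from datetime import date, timedelta

# Hypothetical knowledge-base records -- "owner", "version", and
# "last_reviewed" are assumed metadata an "ingestion-ready" article
# might carry; adapt to whatever your content system actually stores.
articles = [
    {"title": "Billing FAQ", "owner": "jane", "version": 3,
     "last_reviewed": date(2024, 1, 10)},
    {"title": "SSO setup", "owner": None, "version": 1,
     "last_reviewed": date(2023, 6, 2)},
]

STALE_AFTER = timedelta(days=90)  # illustrative review cadence

def audit(article: dict) -> list[str]:
    """Flag articles that break the infrastructure contract."""
    issues = []
    if not article["owner"]:
        issues.append("no owner")
    if date.today() - article["last_reviewed"] > STALE_AFTER:
        issues.append("review overdue")
    return issues

for a in articles:
    if problems := audit(a):
        print(f"{a['title']} (v{a['version']}): {', '.join(problems)}")
```

A check like this can run on a schedule so gaps surface automatically, which is what moves content changes from “when someone finds time” to a shipped routine.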
Real-world example
At Intercom, we’ve evolved our New Product Introduction (NPI) process by aligning early with R&D on a single, canonical source of truth that becomes the foundation for all downstream content – including what the AI Agent uses to resolve queries. By embedding content creation into launch readiness, not as an afterthought, we’ve consistently hit 50%+ resolution rates on new features from day one.
This infrastructure layer often determines whether teams scale confidently or stall out. Without it, every improvement is harder and AI performance remains inconsistent. With it, your AI Agent gets better every day – and the system compounds.
5. Make belief visible
Even the best system won’t keep improving if people stop believing in it. Belief will fade quietly if you don’t reinforce it.
Keep it strong by:
- Sharing specific wins regularly.
- Highlighting improvements with metrics.
- Recognizing the people behind those improvements and giving them space to lead.
This is about more than just team morale. It’s about keeping everyone aligned and excited about the bigger play you’re all part of.
Putting it all together
Building an AI-first support organization means having the right people and the right systems to support them.
When ownership is clear, iteration is safe, knowledge is reliable, and belief is visible, AI performance compounds. And as the AI Agent gets better, your entire support model gets faster and more scalable.
This is the foundation of a modern support organization.
Next week, we’ll take this one level deeper and explore how capacity planning changes when AI handles the majority of your work and your team moves into higher-value roles.
To follow along with the series and have each new edition emailed to you directly, drop your details here.

