The first part of this post is copied directly from Product School to recap the CIRCLES framework. If you already know it, feel free to skip ahead to the section where I put the framework into action on a real example.
The Seven Stages of CIRCLES
The CIRCLES Method is a powerful product design framework and checklist that ensures product managers move from vague objectives to justified, logical recommendations. Developed by Lewis Lin, it serves as a memory aid to prevent skipping critical steps in the design process.
- Comprehend the Situation: Before jumping to solutions, you must clarify the goal (e.g., revenue vs. engagement), understand constraints, and determine the context using the “3 W’s and H”: What is it? Who is it for? Why do they need it? How does it work?
- Identify the Customer: You must define a specific target audience and choose a single persona to focus on. This allows for deeper empathy and a solution that meets specific needs rather than a mediocre “all-in-one” device.
- Report Customer Needs: List the essential requirements using a User Story format: “As a [persona], I want [goal/desire] so that [benefit]”.
- Cut (Through Prioritization): Since you can do anything but not everything, you must prioritize use cases based on factors like feasibility, impact, and alignment with business goals.
- List Solutions: This is the creative phase where you brainstorm at least three to ten ideas. To truly innovate, you must “Think Big” and avoid “me-too” or simple integration ideas.
- Evaluate Trade-offs: Assess the pros and cons of each proposed solution, considering implementation complexity, cost, and impact on user experience.
- Summarize Recommendation: Provide a concise 20–30 second summary stating what you recommend, why it is beneficial, and why it was preferred over other options.
Putting CIRCLES into Action: The Agentic Research Framework
Product frameworks have a seductive quality. They promise structure where there’s ambiguity, completeness where there’s chaos.
But here’s what I learned when I tried to use one on a real-world problem: frameworks are scaffolding, not architecture. I recently used CIRCLES to define a real product, an agentic research tool for R&D discovery, and the exercise revealed exactly where the framework helped and where it started to generate noise. I brainstormed through each step with Claude, so some of the friction I describe below may be Claude getting stumped rather than the framework itself being performative.
The Problem Worth Solving
Before diving into CIRCLES, here’s the pain point. R&D teams doing discovery with business partners spend enormous amounts of time researching context before they can even start solving problems. They’re hunting through Confluence for:
- Team structure and operating models
- Strategic priorities and roadmaps
- Past attempts and lessons learned
- System dependencies and technical constraints
- KPIs and success metrics
This research takes hours. Worse, critical information gets missed: constraints that only surface six months into a project, when they derail everything. The goal: compress this from hours to minutes, and surface insights a human researcher might overlook entirely.
Where CIRCLES Actually Helped
Comprehend the Situation forced useful specificity upfront. Working through the “3 W’s and H” (What is it? Who is it for? Why do they need it? How does it work?), I had to articulate that this wasn’t just “better search.” It’s a human-augmented research agent that plugs into Atlassian’s Rovo Search MCP plugin, with a specific success metric: reduce research time from hours to under 40 minutes while surfacing insights the user couldn’t have found on their own.
That last part matters. “Faster search” is a feature. “Finds what you would have missed” is a product. Differentiating those two was a genuine moment of clarity: the primary job is still finding the right information; speed gains can come later.
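To make “plugs into Rovo Search MCP” concrete, here’s a minimal sketch using the MCP Python SDK. Treat the server command and tool name as assumptions, placeholders for whatever Atlassian actually ships; only the `ClientSession` plumbing is real SDK usage.

```python
import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

# ASSUMPTION: command and tool name are placeholders. Check Atlassian's docs
# for how the Rovo Search MCP server is actually launched and what it exposes.
ROVO = StdioServerParameters(command="rovo-search-mcp", args=[])

async def search_confluence(query: str) -> str:
    async with stdio_client(ROVO) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # Discover the server's tools, then invoke its search capability.
            tools = await session.list_tools()
            print("available tools:", [t.name for t in tools.tools])
            result = await session.call_tool("search", arguments={"query": query})
            return "\n".join(getattr(c, "text", "") for c in result.content)

if __name__ == "__main__":
    print(asyncio.run(search_confluence("team operating model")))
```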
Identify the Customer prevented the classic trap of building for everyone and delighting no one. I listed three potential users (Product Managers, Research Strategists, Business Leaders) and forced myself to pick one: the Product Manager. This persona audits roadmaps across business units, manages cross-functional dependencies, and consistently gets blindsided by information buried in spaces they didn’t know to search.
Report Customer Needs translated vague pain into something actionable. Instead of “research is hard,” I got to specific user stories:
“As a Product Manager, I want to understand the team’s current friction points and operating model so that I can identify the most impactful problems to solve.”
The framework’s insistence on the “As a [persona], I want [goal] so that [benefit]” structure isn’t just interview theater. It forces you to connect the feature to the outcome. Why does the user need to understand friction points? Because without that context, they’ll propose solutions that don’t fit how the team actually operates.
Cut was straightforward once the user stories were clear. Among historical context, strategic alignment, and operational awareness, I prioritized operational awareness. Understanding how a team actually works is the foundation; everything else builds on it.
Where the Framework Started Generating Noise
Here’s where things got interesting. The List Solutions step asks you to brainstorm “at least three to ten ideas” and “think big.” In theory, this prevents anchoring on your first idea. In practice, it can produce filler.
The initial suggestions I got:
- Friction Heatmap: A visual dashboard analyzing Confluence edit history and comments to highlight operational friction.
- “Day-in-the-Life” Narrative Generator: An AI that synthesizes team pages into a walkthrough of their operating model.
These sound reasonable until you push on them. The heatmap: whose friction is it visualizing? The product manager’s research process or the business team’s operations? The narrative generator is even vaguer: “synthesize disparate pages into a narrative” describes a capability, not a solution to the actual problem. Both were disconnected from the problem I was trying to solve, so I pushed back: either suggest something grounded in the user’s actual workflow, or pursue one strong idea rather than padding the list.
This is where frameworks can mislead you. The instruction to generate multiple ideas is meant to expand your thinking. But if you’re not careful, it becomes a box-checking exercise where quantity substitutes for quality. Three mediocre ideas aren’t better than one good one; they’re worse, because they create the illusion of rigor.
Finding the Actual Solution
After the pushback, a better alternative emerged: the Risk & Contradiction Auditor, an agent that proactively finds conflicting information across teams, like when Team A’s roadmap depends on a system that Team B recently deprecated.
This was genuinely interesting. It addressed the “unknown unknowns” problem directly. But the implementation complexity was severe, requiring the AI to understand logical dependencies across different teams’ documentation.
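To see why, consider the easiest possible version. If dependency and deprecation data were already structured, the audit would be a few lines, as in the toy sketch below with made-up data. The hard part is that this information lives in free-form Confluence prose, so extracting those facts reliably is the actual product.

```python
# Toy data -- in reality these facts are buried in unstructured Confluence
# pages, and extracting them reliably is the severe complexity flagged above.
team_a_roadmap = {"Q3 launch": ["billing-v1", "auth-service"]}
team_b_systems = {"billing-v1": "deprecated", "auth-service": "active"}

def audit(roadmap: dict[str, list[str]], systems: dict[str, str]) -> list[str]:
    """Flag roadmap items that depend on a system another team deprecated."""
    return [
        f"'{item}' depends on deprecated system '{dep}'"
        for item, deps in roadmap.items()
        for dep in deps
        if systems.get(dep) == "deprecated"
    ]

print(audit(team_a_roadmap, team_b_systems))
# ["'Q3 launch' depends on deprecated system 'billing-v1'"]
```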
The solution I landed on: the Agentic Query Researcher. Here’s how it works (a rough sketch in code follows the list):
- Takes the user’s research query
- Asks for specific constraints
- Expands the query to include adjacent problems the user might not have thought to search for
- Confirms which Confluence spaces and organizations it will search
- User verifies, then the agent runs and produces a summarized report
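Here’s a minimal sketch of that loop in Python. Everything here is hypothetical: the `llm`, `search`, and `ui` objects stand in for whatever model, Rovo search client, and chat surface you’d actually wire in. The point is the shape of the workflow, especially the verification checkpoint.

```python
from dataclasses import dataclass, field

@dataclass
class ResearchPlan:
    query: str
    constraints: str = ""
    expansions: list[str] = field(default_factory=list)
    spaces: list[str] = field(default_factory=list)

def run_research(query: str, llm, search, ui) -> str:
    """One pass through the five steps above; ui keeps the human in the loop."""
    plan = ResearchPlan(query=query)

    # Step 2: ask for specific constraints before doing anything.
    plan.constraints = ui.ask("Any constraints (teams, systems, time range)?")

    # Step 3: expand to adjacent problems the user didn't think to search for.
    plan.expansions = llm.list_completions(
        f"Related questions a PM researching '{query}' should also ask, "
        f"given constraints: {plan.constraints}"
    )

    # Step 4: surface exactly which Confluence spaces will be searched.
    plan.spaces = search.candidate_spaces(plan.query, plan.constraints)

    # Step 5: verification checkpoint -- no black-box execution.
    if not ui.confirm(f"Search {plan.spaces} for {[query, *plan.expansions]}?"):
        return "Aborted: user revised the plan."

    results = [search.run(q, spaces=plan.spaces) for q in [query, *plan.expansions]]
    return llm.complete(f"Summarize into a research report:\n{results}")
```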
This hits the core need, operational awareness of friction points, while keeping the user in control. The expansion step is key: it doesn’t just find what you asked for; it identifies what you should have asked for. And the verification checkpoint before execution means the user stays in the loop rather than trusting a black box.
Evaluate Trade-offs (Where Frameworks Regain Value)
The Evaluate Trade-offs step is where the framework redeemed itself. Comparing the Agentic Query Researcher against the Risk Auditor:
Agentic Query Researcher
- Pro: High user control; catches adjacent problems; verification step builds trust
- Con: Summarization risks hallucination; quality depends on query expansion logic
Risk & Contradiction Auditor
- Pro: Surfaces unknown unknowns that cause project failures
- Con: Extremely high implementation complexity; requires deep semantic understanding of cross-team dependencies
The trade-off framework made the choice clear. The Risk Auditor is a better product if you can build it—but the “if” is doing a lot of work. The Agentic Query Researcher delivers most of the value at a fraction of the complexity.
The Recommendation
Build the Agentic Query Researcher. It compresses research from hours to minutes, surfaces friction points and value propositions for discovery conversations, and front-loads constraints that would otherwise derail projects later. Most importantly, it scales: you can extend the query templates to cover new research dimensions without rebuilding the core system.
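The “extend the query templates” claim could look something like this (hypothetical template names throughout): covering a new research dimension is a one-line data change, not a rebuild of the core loop.

```python
# Hypothetical templates -- one per research dimension the agent covers.
QUERY_TEMPLATES: dict[str, str] = {
    "operating_model": "How does {team} work day to day: rituals, tools, handoffs?",
    "friction_points": "What slows {team} down: incidents, blockers, rework?",
    "constraints": "What technical or policy constraints bind {team}?",
}

def build_queries(team: str) -> list[str]:
    return [t.format(team=team) for t in QUERY_TEMPLATES.values()]

# Extending coverage is just adding a key; the research loop is untouched.
QUERY_TEMPLATES["past_attempts"] = "What has {team} tried before and abandoned?"
```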
What I Actually Learned
CIRCLES helped where it provided forcing functions: specificity about the problem, commitment to a single persona, translation of pain into user stories, and structured comparison of alternatives.
It hurt where it encouraged completeness for the sake of it: “brainstorm ten ideas” became an invitation to generate filler rather than pressure-test the one idea that actually mattered.
Frameworks are tools. They’re only as good as the judgment you bring to them.