Day 12: Turn Customer Feedback Into a Product Roadmap
The Concept
The most common way to build a product roadmap is to collect feature requests, sort them by how many customers asked for them, and build the most-requested items first. It feels logical. It is also one of the most reliable ways to build the wrong thing. Customers are excellent at describing their frustrations and poor at diagnosing the solution. Henry Ford is often quoted as saying that if he had asked customers what they wanted, they would have said a faster horse. The horse was the wrong solution to the right problem. The job they were trying to do — get somewhere quickly — was real. The feature they requested — a faster horse — was a constraint masquerading as a solution.
The Jobs-to-be-done framework inverts this. Instead of asking what customers want, it asks what they are trying to achieve when they reach for your product. The insight is that people do not buy products; they hire products to do a job. When your product does that job better than any alternative — including doing nothing — they keep paying. When it does not, they churn, even if they cannot articulate exactly why.
How AI Turns Raw Feedback Into Actionable Patterns
The problem with customer feedback is not that there is too little of it. It is that it arrives in inconsistent formats — some in support tickets, some in interview transcripts, some in feature request forms, some in off-hand comments in sales calls — and extracting the pattern requires reading everything, holding it all in mind simultaneously, and noticing where themes emerge. This is genuinely difficult for a solo founder managing a business at the same time. AI can do this analysis in seconds.
Otter.ai and Fireflies.ai both record and transcribe customer calls automatically. After three calls, you have a combined transcript that represents real customer language — the exact words they use to describe their problem, their current workaround, and their ideal outcome. Paste that transcript into Claude with the prompt below and you get a pattern analysis that would take a human analyst half a day to produce. The output is not perfect, but it is grounded in what real customers actually said, which is more than most roadmaps can claim.
The RICE Framework for Prioritisation
Once you have a list of potential improvements, you need a way to decide which to build first. The RICE framework — Reach, Impact, Confidence, Effort — gives each feature a score that makes prioritisation explicit rather than political. Reach asks how many customers this affects in a given time period. Impact asks how much it moves the key metric for those customers, conventionally scored on a scale from 0.25 (minimal) to 3 (massive). Confidence asks how sure you are about the reach and impact estimates, expressed as a percentage. Effort asks how many weeks of work the feature requires. Divide the product of the first three by effort, and the highest-scoring feature should move to the top of the queue.
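The arithmetic is simple enough to sketch in a few lines. The feature names and estimates below are hypothetical placeholders — substitute your own list:

```python
# Minimal RICE scoring sketch. Feature names and estimates are hypothetical.
# Each tuple: (name, reach per quarter, impact 0.25-3, confidence 0-1, effort in weeks)
features = [
    ("Bulk export",     120, 2.0, 0.8, 4),
    ("Slack alerts",     60, 1.0, 0.5, 1),
    ("Custom branding", 200, 0.5, 1.0, 2),
]

def rice_score(reach, impact, confidence, effort):
    # RICE score = (Reach x Impact x Confidence) / Effort
    return (reach * impact * confidence) / effort

# Sort highest score first: that feature moves to the top of the queue.
ranked = sorted(features, key=lambda f: rice_score(*f[1:]), reverse=True)
for name, *estimates in ranked:
    print(f"{name}: {rice_score(*estimates):.1f}")
```

Notice that in this hypothetical set, the low-impact but cheap, certain feature ("Custom branding", score 50.0) edges out the high-impact but expensive one ("Bulk export", 48.0) — exactly the kind of tradeoff the scoring makes visible before it becomes political.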
Linear is worth using for managing the output — it combines roadmap planning, sprint management, and issue tracking in a tool that is fast enough that engineers actually use it. The discipline of scoring features before committing to them forces the conversation about tradeoffs that most product teams avoid until it is too late.
The Feedback That Should Make You Pause
The most valuable signal in any batch of customer feedback is feedback that suggests you are targeting the wrong customer or solving the wrong problem. This often shows up as a pattern of customers using the product differently than you intended, a cluster of feature requests that describe a completely different use case, or churn notes that reveal the product solved the stated problem but not the underlying job. AI is particularly good at flagging this kind of signal precisely because it has no emotional investment in your current roadmap. It will tell you when the pattern suggests a repositioning opportunity rather than a feature addition, which is the kind of honesty that is hard to get from advisors who do not want to discourage you.
Prompt of the day
Copy this into your AI tool and replace any bracketed placeholders.
Prompt
You are a product manager trained in the Jobs-to-be-done framework. I am sharing feedback I have collected from customers: [PASTE CUSTOMER INTERVIEW NOTES, SUPPORT TICKETS, OR FEATURE REQUESTS — at least 5 examples]. My product currently does [DESCRIBE CORE PRODUCT]. Analyse the feedback and: 1. Identify the top 3 themes or patterns in what customers are struggling with. 2. Separate what they are asking for from the underlying job they are trying to do. 3. Suggest 5 potential product improvements ranked by likely impact, with rationale. 4. Flag any feedback that suggests I am targeting the wrong customer or solving the wrong problem. 5. Produce a prioritised roadmap for the next 60 days with 3 features or improvements.
Your 15-minute task
Record your next 3 customer calls with <a href="https://otter.ai" target="_blank" rel="noopener noreferrer">Otter.ai</a>. Paste the combined transcripts into Claude with this prompt. The roadmap it generates will be grounded in real customer language.
Expected win
A pattern analysis of your customer feedback, a prioritised 60-day product roadmap, and a clear distinction between what customers ask for and what they actually need.
Power user tip
After building your roadmap, follow up with: 'I have decided to prioritise [FEATURE A]. Write a one-paragraph internal brief explaining why we are building this, what problem it solves, and how we will know it worked.' This brief prevents scope creep during development.