Day 19: Run Win/Loss Reviews That Actually Improve Your Win Rate
The Concept
Every closed deal — won or lost — contains data. Not the kind your CRM captures, which is mostly timestamps and stage labels, but the real data: what the prospect said when they were nervous, the moment the conversation shifted, the objection that came up at the wrong time, the thing that made them say yes. That data lives in your memory and your notes for a few weeks after the deal closes, and then it fades. If you don't extract it deliberately, it's gone.
Most reps don't run win/loss reviews for the same reason they don't do anything that requires structured reflection: there's no habit, and there's no urgency. A lost deal hurts for a day and then you move on. A won deal becomes a celebration and then you move on. Either way, the learning opportunity sits uncaptured until the same pattern shows up six months later in another deal and costs you again.
Why Standard Review Questions Don't Work
If you've ever been part of a team win/loss review process, you've probably seen how it usually goes. Manager asks: "Why did we lose this one?" Rep says: "Price was an issue" or "They went with the incumbent." Manager nods. The call ends. Nothing changes. These surface-level explanations feel like analysis but aren't. Price is almost never the real reason — it's the explanation a prospect uses when they don't want to say the real thing. "Went with the incumbent" is an outcome, not a cause.
Good review questions push past the first answer. Why did price become an objection at the end rather than earlier in the cycle? What would have had to be true earlier for price not to be the issue? Was there a moment in discovery where you could sense the decision was going somewhere else? The difference between a surface-level and a substantive review is following each answer with one more question — which is exactly the kind of persistent, structured questioning AI is good at.
Getting Honest Feedback From Lost Prospects
The single best source of win/loss intelligence is the prospect who didn't buy from you. Most reps never go back to ask. It feels awkward, maybe a little desperate. But a simple, non-agenda-driven message — sent 30 days after the decision, when the dust has settled — works more often than you'd think. Prospects respect the professionalism of someone who asks to learn rather than asks for a second chance. And the feedback they give, when they do respond, is worth more than any internal analysis.
The key is in the framing. You're not asking them to reconsider. You're not asking what you could have done to win. You're asking what mattered most to them in the decision, and whether there's anything you should understand for the future. That's a genuine question, and it gets genuine answers.
The Compound Effect of Consistent Review
A single win/loss debrief gives you one data point. Run them for a month — even informally — and you start to see patterns. The deals where discovery was shallow tend to stall at proposal. The deals where you got to the economic buyer early close faster. The deals that start as inbound from a specific source have a different close rate than outbound. These patterns are invisible in any single deal and obvious across twenty. This is the compound return on the habit — and it's the thing that separates reps whose win rate steadily improves from those who close at the same rate for years.
Prompt of the day
Copy this into your AI tool and replace any bracketed placeholders.
Prompt
I want to run a structured win/loss debrief on a recently closed deal. I'm going to share what happened at each stage, and I want you to help me extract the real lessons — not surface-level takeaways, but the underlying patterns that might be driving my results.

Here is the deal:

Outcome: [WON / LOST]
What I sell: [YOUR PRODUCT OR SERVICE]
Prospect: [COMPANY TYPE / ROLE — anonymise if needed]
Deal size and length: [APPROXIMATE VALUE AND CYCLE LENGTH]

What happened at each stage:
- First contact / prospecting: [HOW IT STARTED AND WHAT THE FIRST IMPRESSION WAS]
- Discovery: [WHAT CAME OUT, WHAT DIDN'T, HOW THE PROSPECT ENGAGED]
- Demo or proposal: [HOW IT WENT, WHAT THEIR REACTION WAS]
- Objection handling / late stage: [WHAT CAME UP AND HOW YOU HANDLED IT]
- Final decision: [WHAT THEY TOLD YOU AND WHAT YOU THINK WAS REALLY GOING ON]

Any direct feedback from the prospect: [QUOTE THEM IF YOU CAN — EVEN PARAPHRASES COUNT]
What I think drove the outcome: [YOUR OWN HONEST HYPOTHESIS]

Please produce a structured debrief covering:
1. What went well — be specific, not generic
2. What went wrong or was missing — same
3. One thing to replicate in the next deal
4. One thing to change in the next deal
5. A hypothesis about the real underlying driver of the outcome — something I might not have named myself
Your 15-minute task
Pick a deal that closed in the last 60 days — won or lost, both are valuable. Fill in every stage of the prompt with real details from memory, your CRM, or your notes. Be honest in your own hypothesis field — the tension between what you think happened and what the AI surfaces is where the insight lives. Save the debrief output with the deal name and date.
Expected win
A structured debrief on one real deal with a specific hypothesis about what drove the outcome — plus one replicable pattern and one concrete change, giving you a named improvement to carry into your next deal.
Power user tip
If the deal was lost, follow up with: 'Draft a short, genuine message I could send to the prospect 30 days after the decision — not to re-open the deal, but to ask for honest feedback and leave the door open for the future. It should be two sentences maximum and feel like a confident professional, not someone who is chasing.'