Day 20: Campaign Retrospective
The Concept
Most campaign retrospectives happen in one of two ways: not at all, because the next campaign has already started and no one has time; or as a meeting where the team agrees the campaign was mostly fine, notes a few things to do differently, and moves on without producing anything that changes the next campaign's brief. Either way, the institutional knowledge from the campaign — what worked, what did not, and crucially why — dissipates within weeks.
This is one of the most expensive habits in marketing. Every campaign contains genuine signal about your audience, your channels, and your assumptions. Losing that signal means starting the next campaign from the same position rather than from a higher base. Over time, teams that run structured retrospectives develop a compounding advantage over teams that do not: each campaign makes the next one marginally smarter, and that margin compounds.
Why retrospectives fail
The standard retrospective fails for structural reasons, not motivational ones. It is typically facilitated by the person most responsible for the campaign's outcome, which creates a status threat that subtly shapes what gets said. It focuses on what happened rather than on which decisions produced the outcome. It ends with a list of "learnings" that are too abstract to change anyone's behaviour — "we should plan earlier", "the brief needs to be clearer" — without identifying the specific decisions that will be made differently next time.
AI has no status to protect and no career risk attached to a campaign that missed its targets. It will identify the critical decision — the one choice made before or during the campaign that most affected the outcome — without the social friction that makes that conversation difficult in a team setting.
The critical decision framing
The most useful section in a retrospective is not what worked or what did not — it is the identification of the single decision that most affected the outcome. This is not the same as what went wrong. A campaign can miss its targets because of a decision that was entirely reasonable given the information available at the time — a targeting assumption that was logical but proved incorrect, a channel allocation based on previous performance that did not hold in this context.
Identifying the critical decision is valuable because it is specific enough to change. "We over-invested in LinkedIn relative to the audience size we were targeting" is a decision you can make differently. "Our targeting was too broad" is an observation that sounds actionable but is too vague to enforce in the next brief. The retrospective prompt is structured to push toward the specific decision, not the general observation.
The next campaign brief as the test of a retrospective
A retrospective that does not change the next campaign brief is decorative. The final output in today's prompt is a one-paragraph starting brief for the next campaign that directly incorporates the lessons from this one. If you can write that brief and it looks substantively different from the brief that launched the campaign you just reviewed, the retrospective worked. If it looks similar, the lessons were not specific enough.
The brief paragraph is the practical bridge between review and action. It closes the loop between what you learned and what you do next, which is the only measure of a retrospective that matters.
Prompt of the day
Copy this into your AI tool and replace any bracketed placeholders.
Prompt
You are a marketing strategist who facilitates post-campaign debriefs. Your role is to help me run a structured retrospective on a recent campaign — not to make us feel good about what happened, but to extract the decisions, assumptions, and outcomes that will make the next campaign meaningfully better.

The campaign I am reviewing: [e.g. a 6-week lead generation campaign in Q1 targeting mid-market manufacturing companies, using LinkedIn ads, a content download, and an email nurture sequence]
The goal we set: [e.g. 80 marketing-qualified leads at a cost-per-lead below £95]
The actual result: [e.g. 54 MQLs at £138 CPL — missed both targets]
What we spent: [e.g. £7,452 total — £5,200 on LinkedIn ads, £1,400 on content production, £852 on tooling]
What the data showed — paste any metrics you have: [PASTE CHANNEL PERFORMANCE DATA, conversion rates, engagement metrics, pipeline generated, etc.]
What I already think went wrong: [e.g. the content offer was too top-of-funnel for the audience we were targeting; the LinkedIn targeting was too broad and we ran out of budget before optimising]
What I think went right: [e.g. the email nurture sequence had strong engagement — 38% open rate and 11% click rate, which is well above our average]

Run a structured retrospective with the following sections:
1. Result vs. goal summary — one paragraph framing what happened, without blame or spin.
2. What worked — three specific things that performed above expectation, with the data or evidence for each. Not general observations — specific decisions that produced specific outcomes.
3. What did not work — three specific things that underperformed, with a hypothesis for why each one failed. Be direct.
4. The critical decision — identify the single decision made before or during the campaign that most affected the outcome (positively or negatively), and explain what a different decision might have produced.
5. What to carry forward — three specific practices, formats, or approaches to use in the next campaign.
6. What to stop — two things not to repeat, and why.
7. The next campaign brief — a one-paragraph starting brief for the next campaign that incorporates the lessons from this one.
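Before you fill in the result fields, it is worth double-checking your own numbers, since the AI will reason from whatever figures you paste in. A minimal sketch of that sanity check, using only the illustrative placeholder numbers from the prompt above (not real campaign data):

```python
# Sanity-check the campaign numbers before pasting them into the prompt.
# All figures here are the example placeholders from the prompt, not real data.

def cost_per_lead(total_spend: float, leads: int) -> float:
    """Return cost per lead, rounded to two decimal places."""
    return round(total_spend / leads, 2)

# Spend breakdown from the example: LinkedIn ads + content production + tooling
spend = 5200 + 1400 + 852            # = 7452 (total in GBP)

actual_leads = 54
goal_leads = 80
goal_cpl = 95

actual_cpl = cost_per_lead(spend, actual_leads)   # 138.0 — matches the £138 in the example

# How far off target was the campaign?
cpl_overrun_pct = round((actual_cpl - goal_cpl) / goal_cpl * 100, 1)   # ~45.3% over the CPL target
lead_shortfall = goal_leads - actual_leads                              # 26 MQLs short of goal
```

If the CPL you compute from spend and lead count does not match the CPL in your reporting tool, resolve that discrepancy first — a retrospective built on inconsistent numbers will mislocate the critical decision.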
Your 15-minute task
Pick a campaign that ended in the last 90 days — not your best one, not your worst one, just a real one you have enough data on. Fill in every field with your actual numbers and honest observations. The 'what I already think went wrong' field is the most important — do not sanitise it. Run the prompt. Read the 'critical decision' section first: if it identifies a decision you recognise as consequential, the retrospective is working. Share the 'what to carry forward' and 'what to stop' sections with your team this week, while the campaign is still fresh in memory.
Expected win
A structured campaign retrospective — result summary, what worked and what did not with evidence, the single most consequential decision, three things to carry forward, two things to stop, and a starting brief for the next campaign — completed in 20 minutes rather than in a half-day workshop.
Power user tip
After the retrospective, ask: 'Based on everything in this debrief, write me three questions I should ask at the campaign kick-off meeting for our next campaign to prevent us from making the same critical decision again — and for each question, explain which finding from this retrospective it addresses.' Pre-mortem questions built from real post-mortem data are far sharper than generic planning checklists.