Day 6: Analyse Employee Survey Data
The Concept
Most organisations spend significant time and money running employee surveys. The response rates, question design, and vendor selection receive genuine attention. What happens after the survey closes — the analysis, the communication, the action planning — receives far less. In many organisations, the gap between survey close and leadership presentation stretches to weeks. The gap between leadership presentation and visible action stretches to months. By the time employees see anything change as a result of the survey they filled in, they have often forgotten they participated.
This is not a data problem. Modern survey platforms generate more analytics than most HR teams have time to interpret. It is a translation problem: translating raw scores and open-text comments into a narrative that leadership can act on, within a timeframe that makes the response feel connected to the input. That translation work is time-consuming and requires both analytical judgement and presentational skill. AI can significantly compress the time it takes.
Why Employee Surveys Fail — Not at Collection, at Analysis
The failure mode of most employee surveys is not at the point of collection. Response rates are imperfect but manageable. The failure mode is at the point of analysis and action. Open-text comments are the richest data in any survey — they contain the nuance, the specific frustrations, the language employees use to describe their experience — but they are also the hardest to process at scale. Reading 400 open-text responses and extracting coherent themes takes hours. Most HR teams do not have those hours, so the comments get skimmed, the themes are intuited rather than evidenced, and the executive summary leans on the quantitative scores that are easier to report but less revealing.
The result is a presentation that shows leadership a series of bar charts and percentage scores, accompanied by generic statements about what the numbers suggest. Leadership receives the data in a form that is hard to connect to specific decisions. Questions like "what exactly are people unhappy about in the manager effectiveness score?" remain unanswered because the open-text analysis was not thorough enough to support a specific answer.
The Difference Between Quantitative and Qualitative Signal
Engagement scores tell you where problems exist. Open-text comments tell you what the problems actually are and how employees experience them. These two sources of data do not always agree, and the disagreements are often the most informative part of the analysis. A low score on career development accompanied by comments that consistently describe managers who do not discuss growth may point to a manager capability gap rather than a programme gap. A moderate engagement score accompanied by comments about workload, uncertainty, and leadership communication may suggest that the quantitative metric is flattering a more troubled picture.
AI is well suited to this kind of cross-referencing because it can process large volumes of text quickly, identify recurring language patterns, and surface tensions between what the numbers show and what the words say. It will not make the judgement call for you — whether a theme represents a systemic issue or a vocal minority requires human context. But it can do the pattern recognition work that currently takes hours, so you can spend your analytical time on interpretation rather than cataloguing.
Framing Findings for Executive Audiences
The most common mistake in survey reporting to leadership is leading with data rather than insight. A slide that shows "recognition score: 49%" is data. A slide that says "nearly half of employees do not feel their contributions are acknowledged — and the open-text comments suggest this is concentrated in the operations and customer service teams, where line manager praise is the primary recognition mechanism" is insight. The difference is not in the number — it is in the context, the specificity, and the implied action.
Leadership teams make decisions when they understand the cost of inaction and the available path forward. Your executive summary should answer three questions they will be asking silently: how bad is this, is it getting worse, and what are we being asked to do about it? The prompt today is designed to produce exactly that structure. Your job is to review the output for accuracy, add the organisational context only you have, and make sure the recommended actions are genuinely within leadership's power to deliver in 90 days.
Prompt of the day
Copy this into your AI tool and replace any bracketed placeholders.
Prompt
You are an HR analytics specialist who helps people teams extract insight from employee survey data and present findings to leadership. I have just completed an employee engagement survey at [COMPANY NAME] with approximately [NUMBER] respondents. Here is the raw data I need you to work with:

Quantitative scores: [PASTE SCORES — e.g. 'Overall engagement: 62%. Manager effectiveness: 58%. Recognition: 49%. Career development: 44%. Inclusion: 71%.']

Open-text comments (a representative sample of verbatim responses): [PASTE 10-20 OPEN-TEXT COMMENTS — copy and paste directly from your survey export]

Please do the following:

1. Identify the three to five strongest themes in the open-text comments, with two or three direct quotes that illustrate each theme
2. Identify where the quantitative scores and the qualitative comments align, and where they contradict each other
3. Write a one-page executive summary of the findings, structured as: what we asked, what we heard, what needs attention, and what we recommend
4. Suggest three specific actions leadership could take in the next 90 days that are directly tied to the findings
5. Flag any comments that suggest an urgent individual concern that may need to be followed up outside the survey process

Do not generalise beyond what the data shows. If a finding is ambiguous, say so.
Your 15-minute task
Pull your most recent engagement or pulse survey export. Copy your headline quantitative scores and paste 10 to 20 verbatim open-text comments into the prompt — the more varied the better. Run it and compare the AI's theme identification against your own reading of the data. Are there patterns it found that you had not noticed, or themes it named differently than you would have?
Expected win
A one-page executive summary of your survey findings with identified themes, supporting quotes, alignment and tension analysis between scores and comments, three 90-day recommended actions, and a flagged-concerns note — ready to take into a leadership presentation.
Power user tip
Once you have the executive summary, run this follow-up: 'Now rewrite the three recommended actions as a simple decision paper for leadership. For each action: describe the problem it addresses, what doing nothing costs, what the proposed action involves, who owns it, and what success looks like at 90 days. Keep each action to half a page.' Decision papers that spell out the cost of inaction consistently move leadership faster than findings decks alone.