Day 6: When AI Gets It Wrong — And What To Do
The Concept
Knowing what AI gets wrong, why it gets it wrong, and what to do when it does is not pessimism — it is what separates careful and effective AI users from people who occasionally get embarrassed by acting on something that turned out to be false. This day is about building a reflex, not a fear.
The three ways AI fails
Hallucination is the term used when AI generates something factually incorrect while presenting it with the same confident fluency it uses for accurate information. It might invent a statistic, produce a plausible-sounding but non-existent book title, or describe an event that never happened. This happens because AI is generating language based on patterns — it is not retrieving verified facts. The output looks right, reads right, and sounds right. It simply is not. The word "hallucination" sounds almost whimsical; the consequences can range from mildly confusing to genuinely harmful if you act on the output.
Outdated knowledge is a different kind of failure. AI models have a training cutoff — a date after which they were not trained on new information. Events, companies, laws, people, and research that emerged or changed after that cutoff are either absent from the model's knowledge or described in an outdated state. The cutoff varies by model but is typically several months to two years before the day you are using the tool. For anything time-sensitive — recent events, current prices, the latest guidance on something — this matters.
Bias is the subtlest of the three. AI training data reflects the world that produced it, including its inequalities, blind spots, and dominant perspectives. Models trained primarily on English-language internet content reflect the views, assumptions, and gaps of that corpus. This can show up as underrepresentation of certain groups, assumptions about who counts as "normal", or framing that feels skewed in ways that are hard to pinpoint. It is not random noise — it is systematic, and it is worth being aware of, particularly when AI is helping you write about people or make judgements.
Your verification habit
The practical response to all three failures is a simple question: does this matter enough to check? For casual tasks — brainstorming, drafting, organising ideas — the answer is usually no. For anything where you will act, share, publish, or make a decision based on what AI told you, the answer is usually yes.
When something matters, verify it in two steps. First, ask AI itself: "Which parts of your answer are you least certain about, and what would I search to confirm them?" This does not catch everything, but it reliably surfaces the places most worth checking. Second, go to one independent source — a search engine, an official website, or a reputable publication — and confirm the specific claim. Perplexity AI is useful for this because it shows you its sources, making fact-checking faster.
This habit applies not just to AI but to everything you read. AI makes it more necessary — not because AI is uniquely untrustworthy, but because it is unusually fluent and confident. Developing this reflex now will serve you far beyond this course.
Prompt of the day
Copy this into your AI tool and replace any bracketed placeholders.
Prompt
I am going to paste some AI-generated content below. Please review it and tell me: 1) Which specific claims or facts should I verify before acting on this? 2) What would be the best way to check each one — what would I search for, or where would I look? 3) Are there any parts that seem uncertain, speculative, or that I should treat with extra caution? Here is the content: [PASTE AI OUTPUT YOU WANT REVIEWED].
Your 15-minute task
Take any AI response you have received this week — from this course's lessons or any other use. Run it through the verification prompt above. Notice which parts the AI flags as worth checking. Then pick one claim and actually verify it using a search engine or authoritative source. See if it holds up.
Expected win
A verification habit you can apply to any AI output — plus a clear intuition for which types of content are most likely to need checking before you act on them.
Power user tip
Build this into any high-stakes AI task: before you act on the output, open a new conversation and ask "What should I verify in this before I rely on it?" It takes thirty seconds and prevents the most common AI-related mistakes.