Day 18: Choose the Right AI Tool for the Job
The Concept
Most people who use AI regularly settle on one tool early and never question the choice. They found something that worked, and they stuck with it. This is understandable — learning a new tool takes effort, and the major AI assistants all appear, at first glance, to do roughly the same things. The problem is that this assumption quietly costs quality. The three main general-purpose tools — ChatGPT, Claude, and Gemini — are not interchangeable. Each has genuine strengths that matter in practice, and using the wrong one for a task is like using a good screwdriver to drive a nail. It works, but not as well as it should.
Beyond these three general-purpose tools, a separate category of specialist AI tools exists that solve problems the general tools handle poorly. These tools are not better overall — they are purpose-built for specific situations where their design gives them a significant advantage. Knowing when to reach for a specialist tool instead of a general one is one of the more valuable skills an experienced AI user develops. The goal is not to own or subscribe to every available tool. It is to know, when you have a specific task in front of you, which tool is the right choice and why.
Why one tool is not always enough
Each of the major general AI tools was built with different priorities, trained on different data mixes, and designed with different use cases in mind. These differences show up consistently in output quality when you compare them directly. A tool that produces excellent first drafts of casual emails may produce stiff, over-structured responses to nuanced personal dilemmas. A tool that excels at writing code may produce prose that sounds mechanical or generic. These are not random variations — they reflect genuine architectural and training differences that translate into real-world performance gaps.
The practical consequence is that defaulting to a single tool regardless of task means you are sometimes using a tool that is well-suited to what you are asking, and sometimes using one that is not. You may not notice the difference because you have no comparison point. Running the same prompt in two tools side by side — even once — makes the difference visible immediately. Outputs that felt adequate in isolation can look noticeably inferior next to a better-suited tool's version of the same request.
What each tool does best
ChatGPT has the widest ecosystem of plugins and integrations, the largest community producing prompts and tutorials, and strong performance on code-related tasks. If you are working within an existing app that has an AI integration, there is a good chance it runs on ChatGPT. Claude handles long documents well, produces nuanced writing that maintains a consistent voice, and follows complex multi-part instructions more reliably than most alternatives. It is particularly strong when the task requires careful reasoning or precise adherence to a specific format. Gemini is deeply integrated with Google Workspace — it works directly with Drive, Docs, Gmail, and Calendar — which makes it the practical choice for anyone whose work lives inside the Google ecosystem.
Knowing these distinctions does not require memorising a comparison chart. It requires only a rough intuition: long documents and nuanced writing go to Claude, code and integrations go to ChatGPT, and anything involving Google tools goes to Gemini. That intuition sharpens every time you compare outputs directly.
The specialist tools worth knowing about
Two specialist tools are worth understanding regardless of your primary AI setup. Perplexity is built around live web search with citations. Where general AI tools draw on training data with a knowledge cutoff, Perplexity retrieves current information from the web and shows you exactly where each claim came from. Use it when you need information that is recent or time-sensitive, or when source verification matters — news developments, current prices, recent research, or any question where "as of my training data" is not good enough.
NotebookLM is built around your own documents. You upload materials — a PDF, a research paper, a set of notes, a long report — and it answers questions, generates summaries, and creates study guides based only on what you gave it, not on its general training. Use it when you need to work deeply with a specific document rather than ask general questions. Both tools solve problems that general AI handles poorly, and recognising which problem you have before you open a tool is the skill this lesson is designed to build.
Prompt of the day
Copy this into your AI tool and replace any bracketed placeholders.
Prompt
I want to use AI for the following task: [DESCRIBE YOUR TASK IN DETAIL]. I currently use: [TOOL NAME]. Based on this task, please: (1) Evaluate how well my current tool handles this type of task, (2) Tell me if a different general AI tool would do this better and why, (3) Tell me if a specialist tool would be better suited and which one, (4) Suggest one prompt I could use right now to test which tool gives the best result for this specific task.
Your 15-minute task
Pick one task you do regularly with AI. Run the evaluation prompt in the tool you currently use. If it recommends something different, try the same task in that tool and compare the outputs. You are building the habit of choosing tools deliberately rather than by default.
Expected win
A clear understanding of which tool is best for your most common AI tasks — and at least one situation where you have tested two tools side by side.
Power user tip
For any high-stakes document — a cover letter, a contract summary, a presentation — run the same prompt in both Claude and ChatGPT. Compare not just the content but the tone and structure. Use the better version as your base and pull specific elements from the other.