You've probably heard the complaint: "I tried [insert latest AI agent] and it gave me junk. AI is overhyped." The real problem? You're asking a mechanic to build a house without providing blueprints. The agent isn't the weak link—your setup is. After spending more time than I'd like figuring this out, here's a workflow that consistently delivers results.
1. Pick the Model That Fits the Task
Match the model to the job. A sprinter (like Haiku) can attempt a complex distributed system architecture, but the answer won't be production-ready. Your job is to align model capabilities with the problem's complexity.

When to Use a Compact Model
For well-defined tasks with clear specs, acceptance criteria, and enumerated edge cases, a lighter model like Sonnet handles it fine. You'll spend more time reviewing the output, but you'll save money and spot flaws in your own specifications faster—a hidden benefit.
When to Use a Larger Model
If the feature is a tangled mess and you can't (or won't) break it down, hand the whole thing to a more capable model like Opus. You don't need to scope every subproblem, but you must define the complete solution. "Make it work" is not a valid requirement—it's a desperate wish the agent won't understand. A cheap model with precise specs beats an expensive model with vague feelings every time.
2. Plan in Conversation, Touch Code Last
I spend hours—many hours—talking through a problem before writing a single line of code. The AI becomes my rubber duck with attitude: I prompt it to push back instead of praising me, because empty accolades distract from the goal, which is a solid plan.
What to Cover in Your Planning Chat
- Meaningful tech stack – What tools and frameworks are involved.
- Desired outcome – The concrete result you want.
- Acceptance criteria – How you'll know it's done right.
- Test scenarios – Positive, negative, error, edge, and unforeseen cases.
- Explicit non-goals – What you are not building, to avoid scope creep.
Skip these steps and start prompting with "build me a thing," and you'll get a thing—just not your thing.
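To make the checklist concrete, here is what a minimal planning spec might look like before any prompting starts. The feature, file names, and details below are illustrative, not a prescribed template:

```markdown
## Feature: CSV export (illustrative example)

**Tech stack:** Python 3.12, FastAPI, pandas
**Desired outcome:** GET /reports/export returns the current report as a CSV download.
**Acceptance criteria:**
- Response has Content-Type text/csv and a Content-Disposition filename.
- Column order matches the on-screen report.
**Test scenarios:** empty report, 100k-row report, non-ASCII values, malformed query params.
**Non-goals:** no Excel (.xlsx) output, no scheduled exports.
```

A spec this short is enough to expose vague requirements before the agent ever sees them.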

3. Maintain One Source of Truth
Stop duplicating instructions across multiple files. Pick one file (I use AGENTS.md) as the single source of truth, then add one-line markdown links from the other files (copilot-instructions.md, CLAUDE.md, GEMINI.md) pointing back to it. That leaves you one file to manage instead of four.
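One way to wire this up is a few one-line pointer files. A minimal sketch, assuming the file names above and a repo root that contains AGENTS.md (the Copilot path follows GitHub's `.github/copilot-instructions.md` convention):

```shell
#!/bin/sh
# Each pointer file is a single line that defers to AGENTS.md,
# so there is only one source of truth to maintain.
set -eu

echo 'See [AGENTS.md](./AGENTS.md) for all project instructions.' > CLAUDE.md
echo 'See [AGENTS.md](./AGENTS.md) for all project instructions.' > GEMINI.md
mkdir -p .github
echo 'See [AGENTS.md](../AGENTS.md) for all project instructions.' > .github/copilot-instructions.md
```

Note the relative link differs for the nested Copilot file; a broken link here quietly recreates the duplication problem you were trying to avoid.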
Rules vs. Skills
If a rule is always true—for you as the operator or across an entire project—it doesn't belong in a skill. Skills load only when triggered; instructions load on every turn. Know which you need and place each rule accordingly. Let the model maintain AGENTS.md as it works; you don't need a separate MEMORY.md muddying the waters. And when the AI keeps violating the same rule, don't add another file. Update the single source of truth.
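The split between always-loaded rules and trigger-only skills might look like this in practice. The rules and the skill file name are illustrative assumptions, not part of any particular tool's required layout:

```markdown
<!-- AGENTS.md — loaded every turn, so only always-true rules belong here -->
- Never commit directly to main; open a PR.
- Run the linter before proposing any diff.

<!-- skills/release-notes.md — loaded only when a release task triggers it -->
Draft release notes: collect PRs merged since the last tag, group by label.
```

If a line would be wasted context on 95% of turns, it belongs in a skill, not in the instructions file.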
Conclusion: Fix Your Setup, Fix Your Results
The agent isn't the problem—your setup is. Pick the right model, plan before coding, and maintain a single source of truth. None of this is clever, but it works. Start applying these principles, and you'll stop blaming the AI and start getting better results.