Stop Patching, Start Diagnosing
You’ve been here before.
A stakeholder looks at your work and says “something feels off.” You fix the thing they pointed at. Next review: “better, but still not quite right.” You fix the next thing. Third round: “closer, but…” By now you’ve touched every pixel, rewritten every line, and neither of you can articulate what “right” even looks like. You’re exhausted. They’re frustrated. The work is a pile of patches that’s lost whatever clarity it once had.
This is the most common failure mode in product work, and it has nothing to do with skill. It’s a thinking trap. Two people, both smart, both well-intentioned, locked in a loop where each round of feedback produces a surface fix that never reaches the real problem.
I watched this happen in miniature last week — not between two people, but between me and my AI agent.
I asked it to turn an article into a Xiaohongshu image carousel. Five versions came back. I said “the voice is wrong,” it fixed the voice. “No reader value,” it added value. “Needs a personal story,” it rewrote the story. “All text, no visuals,” it added screenshots. Each version was measurably better on the dimension I’d complained about. And I was still unsatisfied.
Five rounds of patches. Zero rounds of diagnosis.
The real issue wasn’t the voice, the value, the story, or the visuals. It was that we never aligned on the most basic question: what is this content supposed to achieve, and for whom? What does the reader walk away with? What does success look like? We dove straight into execution — typefaces, slide layouts, copy tone — without ever establishing what we were building toward.
Once I finally articulated the goal — “I want a reader to watch me use this AI system and think: I could do that too” — the answer was obvious. That goal requires showing a process unfolding in real time. It’s a video, not a carousel. No amount of slide redesign could have gotten there, because the destination was never defined.
Toyota figured this out decades ago and called it the Five Whys. When something breaks, don’t fix the first symptom you see. Ask why it broke. Then ask why that happened. Keep going until you hit something foundational.
In my case:
Why unsatisfied after five versions? → Each version fixed a symptom but missed something deeper.
Why? → We were iterating on execution details without clear success criteria.
Why no success criteria? → We jumped from “turn this into a Xiaohongshu post” straight into production.
Why? → The agent had a carousel tool ready, so it started building.
Why didn’t anyone stop to align on the goal? → Because having a tool that can start immediately makes it feel unnecessary to think first.
Root cause: not the copy, not the format, not the design. We never defined what “good” looked like before we started making things.
The pattern is always the same. Someone says “the onboarding flow is too complicated.” You simplify. Still not right. You simplify more. Twelve experiments later, someone finally asks: “wait — what is onboarding actually supposed to accomplish? Are we trying to get users to their first value moment? Or are we trying to collect profile data for personalization? Because those are different goals and they lead to completely different flows.”
That one question — asked at the beginning — would have saved weeks. Not “how do we make this better?” but “what are we trying to achieve, and how will we know we’ve achieved it?”
When you’ve fixed the stated problem twice and they’re still unsatisfied, stop fixing. The problem isn’t your execution. You’re misaligned on what success looks like.
But there’s a second trap hiding inside the first, and it’s on the other side of the table.
I gave five rounds of feedback. Every single one was a symptom report: “no human voice,” “no value,” “needs a story,” “show the receipts.” Not once did I stop and say what I actually wanted the content to do. If I’d said after round one — “the goal is for a reader to see my actual workflow and feel like they could replicate it; I don’t think static slides can deliver that” — we would have skipped four rounds.
Vague feedback feels honest. But it shifts all the diagnostic burden to the other person. “Something feels off” is not feedback — it’s a sensation. Feedback is when you try to articulate what success would look like, or at least which level the problem lives at: surface (copy, color), structural (flow, architecture), or foundational (wrong goal, wrong audience, wrong format)?
You don’t always know. But “I don’t think this achieves the goal — and actually, have we agreed on what the goal is?” is a thousand times more useful than “this doesn’t feel right.”
Good feedback isn’t just honest. It’s diagnostic. It names the level. It saves everyone rounds.
Two sides of the same coin:
Receiving feedback: resist the patch reflex. Before fixing anything, make sure you and the stakeholder agree on what success looks like. If you can’t describe the target, you can’t hit it, no matter how many rounds you iterate.
Giving feedback: don’t just report your symptom. Ask yourself why it bothers you. Try to say what “good” would look like, or at least flag that you think the problem is more fundamental than the surface. A diagnosis saves rounds. A symptom starts a loop.
After this experience I renamed my AI’s Xiaohongshu skill. The old version was called “Xiaohongshu Carousel” — it jumped straight into building slides. The new version starts with a Format Gate: before choosing any template, first ask what this content is trying to achieve and who it’s for. Then decide the format. Then build.
Small change. But it encodes the question I keep forgetting to ask — whether I’m building or reviewing:
Do we actually agree on what we’re trying to achieve? Or are we just patching our way forward and hoping we’ll know “right” when we see it?