The Iterative Mindset Shift
The mindset that underlies iterative prompting is this: you are not placing an order and waiting for delivery. You are collaborating with a fast, capable, sometimes miscalibrated partner. The first output tells you two things: what the model understood from your prompt, and where that understanding diverges from what you actually want. Every divergence is a signal about what to clarify in the next iteration. This is a fundamentally different relationship with AI from the 'type and hope' approach, and it consistently produces better outcomes on more complex tasks.
Targeted vs. Broad Iteration Instructions
The quality of your iteration instructions determines how efficiently you converge on the right output. Broad instructions ('make it better', 'try again', 'I don't like it') force the model to guess what you didn't like — and it often guesses wrong, producing a second draft that fixes things that were fine and leaves the actual problems unchanged. Targeted instructions identify exactly what to change and how: 'the opening paragraph is too generic — rewrite it to open with the specific failure case the reader is likely experiencing, not a general statement about the problem.' Targeted iteration is surgical; broad iteration is random.
Common Iteration Patterns
Different types of output quality failures require different iteration approaches. For tone problems: 'the tone is too formal for the audience — rewrite in a more conversational register, as if explaining to a colleague.' For structure problems: 'the structure buries the main recommendation — restructure so the recommendation appears in the first paragraph, with the supporting reasoning following.' For specificity problems: 'replace the generic example in paragraph 3 with a concrete scenario from [specific domain].' For length problems: 'this is 40% too long — cut without losing any substance, prioritize cutting filler phrases and redundant points.' Each pattern targets a specific failure mode.
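These patterns lend themselves to reuse. As a minimal sketch, you might keep the recurring failure modes and their instruction templates in a small lookup table; the mode names, template wording, and placeholder parameters here are illustrative, not part of any tool or library:

```python
# Hypothetical templates for targeted iteration instructions.
# Placeholders ({current}, {domain}, etc.) are filled per situation.
ITERATION_TEMPLATES = {
    "tone": ("The tone is too {current} for the audience; rewrite in a "
             "{target} register."),
    "structure": ("The structure buries the main point; restructure so "
                  "{key_point} appears in the first paragraph, with the "
                  "supporting reasoning following."),
    "specificity": ("Replace the generic example in {location} with a "
                    "concrete scenario from {domain}."),
    "length": ("This is {excess} too long; cut without losing substance, "
               "prioritizing filler phrases and redundant points."),
}

def targeted_instruction(failure_mode: str, **details) -> str:
    """Build a targeted iteration instruction for a known failure mode."""
    return ITERATION_TEMPLATES[failure_mode].format(**details)
```

The point of the table is not automation; it is that naming the failure mode first forces you to diagnose before you iterate.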
Saving Versions During Iteration
Before requesting a significant revision to a response you're generally happy with, save the current version. This is particularly important in conversational AI interfaces where you can't easily go back: copy the current output to a document before asking for a major revision. The reason: iterating on a good draft can produce a worse draft if the iteration instruction is slightly off, and you may want to return to an earlier version. Some AI interfaces have built-in branching or version history; when they don't, manual copying is your safety net.
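The manual-copying habit amounts to keeping a tiny append-only history. A sketch of the idea, with hypothetical names, looks like this:

```python
class VersionHistory:
    """Append-only safety net: snapshot drafts before risky revisions."""

    def __init__(self):
        self._versions = []  # list of (draft_text, note) tuples

    def save(self, draft: str, note: str = "") -> int:
        """Store a snapshot; returns an index you can restore later."""
        self._versions.append((draft, note))
        return len(self._versions) - 1

    def restore(self, index: int) -> str:
        """Return an earlier snapshot without discarding later ones."""
        return self._versions[index][0]

    def latest(self) -> str:
        return self._versions[-1][0]
```

Whether it lives in code, a notes file, or a document full of pasted drafts, the essential property is the same: saving is cheap and restoring is always possible.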
Knowing When to Stop and When to Restart
There are two situations where continued iteration becomes counterproductive. First: when each iteration fixes one thing and breaks another, suggesting that the underlying prompt structure is wrong rather than the execution. This is the signal to stop iterating and restart with a better-structured initial prompt that incorporates what you've learned from the failed iterations. Second: when the output is 'good enough' for the actual use case. Prompting toward theoretical perfection on a first draft you're going to edit anyway is wasted time. The right standard for 'done' is: does this output serve the purpose I have for it? Not: is every word perfect?
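The first stopping signal, fix-one-break-another churn, can even be stated mechanically. As a rough sketch (the threshold and the idea of tracking issues as sets are assumptions, not an established method), track which problems each draft has and watch for issues being fixed and reintroduced in the same step:

```python
def should_restart(issue_history: list) -> bool:
    """Heuristic: given the set of issues present in each successive
    draft, flag churn, i.e. iterations that fix one issue while
    introducing another. Repeated churn suggests the prompt structure,
    not the execution, is the problem."""
    churn = 0
    for prev, curr in zip(issue_history, issue_history[1:]):
        fixed = prev - curr    # issues that disappeared this iteration
        broken = curr - prev   # issues that newly appeared
        if fixed and broken:
            churn += 1
    return churn >= 2  # two churning iterations in a row is the signal
```

A healthy iteration trajectory shrinks the issue set monotonically; a churning one swaps issues around. The second stopping signal, 'good enough for the use case', has no formula, which is exactly why it needs a stated standard.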