Prompt Engineering Basics

Defining the Task in Your AI Prompt

Learn why clearly stating the task in your prompt is the single biggest factor in AI output quality.

8 min read

If context is the background and role is the perspective, the task is the engine of your prompt — it's the explicit instruction that tells the model what to produce. Vague tasks produce vague output. Specific, well-formed tasks produce specific, usable output. Understanding exactly how to write a task instruction is the most transferable skill in prompt engineering, because it applies to every type of request across every AI tool.

The Task Is the Core of Every Prompt

The task is the verb — the action you want the AI to perform. Summarize, write, list, compare, explain, generate, rewrite, classify, translate, debug. Without a clear task, even excellent role and context information is wasted because the model doesn't know what to produce. This sounds obvious, but a surprising number of prompts lack a clear task entirely. They provide background, express a vague desire, and then expect the model to infer the action. The model will usually infer something — but it's rarely exactly what you wanted.

Vague Tasks vs. Specific Tasks

The difference between a vague task and a specific task isn't usually the length of the instruction — it's the specificity of the verb and the scope of the action. 'Write something about email marketing' is vague. 'Write a 3-email drip sequence for re-engaging SaaS customers who signed up but never completed setup, with one email per day, starting 3 days after inactivity, each under 150 words' is specific. The second prompt has the same core action (write emails) but defines the number, subject, timing, constraints, and word count. Every one of those additions reduces the model's degrees of freedom and increases the odds that the output matches what you actually need.

How to Write a Task That Leaves No Ambiguity

A well-formed task instruction answers four questions before the model has to ask them:

  • What action? (write, list, summarize)
  • On what subject or material? (this article, the following code, the meeting transcript below)
  • For what purpose or outcome? (so that a customer can understand it, so that it can be published on a career site)
  • In what format or quantity? (as 5 bullet points, as a 300-word paragraph, as a JSON object)

You don't need to answer all four in every prompt, but the more you do, the less guessing the model has to do and the more useful the output will be.
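One way to internalize the four questions is to treat them as slots that get assembled into a single instruction. The sketch below does exactly that; the function name and signature are illustrative, not from any library:

```python
def build_task(action, subject, purpose=None, output_spec=None):
    """Assemble a task instruction from the four questions:
    what action, on what material, for what purpose, in what format."""
    instruction = f"{action} {subject}"
    if purpose:
        instruction += f" so that {purpose}"
    if output_spec:
        instruction += f", as {output_spec}"
    return instruction + "."

prompt = build_task(
    action="Summarize",
    subject="the meeting transcript below",
    purpose="a new team member can catch up in two minutes",
    output_spec="5 bullet points",
)
# "Summarize the meeting transcript below so that a new team member
# can catch up in two minutes, as 5 bullet points."
```

Only the action and subject are required, mirroring the advice above: answer as many of the four questions as the task warrants.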

Breaking Complex Tasks Into Multiple Steps

For tasks with multiple distinct components, listing each step explicitly consistently outperforms bundling everything into a single instruction. Instead of 'write a complete marketing plan,' write: '1. List 5 target audience segments for this product. 2. For each segment, write one sentence describing their primary pain point. 3. Write a positioning statement that addresses the most important segment.' This approach reduces errors, makes the output easier to verify, and allows you to catch and correct mistakes at each stage before they propagate to the next. Numbered steps also help the model track its own progress through complex multi-part tasks.
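If you build prompts programmatically, the numbered-steps pattern is easy to generate from a plain list. This is a minimal sketch, not a library function:

```python
def numbered_task(steps, preamble=""):
    """Render a multi-part task as explicit numbered steps,
    one instruction per line."""
    lines = [preamble] if preamble else []
    lines += [f"{i}. {step}" for i, step in enumerate(steps, start=1)]
    return "\n".join(lines)

plan_prompt = numbered_task([
    "List 5 target audience segments for this product.",
    "For each segment, write one sentence describing their primary pain point.",
    "Write a positioning statement that addresses the most important segment.",
])
```

Each list item stays a single bounded instruction, so the rendered prompt is easy to verify step by step.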

Task Scope: When Smaller Is Better

One of the most common task definition errors is making a single prompt responsible for too much. A prompt that asks for research, synthesis, structuring, writing, editing, and formatting all at once will almost always produce a mediocre result at each stage. Narrowing the scope of each task — even if it means using multiple prompts — produces better output at every step. Think of it like work delegation: you wouldn't give one person a brief and say 'turn this into a finished product.' You'd have different people handle strategy, writing, and editing. The same principle applies to prompt chains.
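A prompt chain can be sketched as a loop where each narrow prompt receives the previous step's output. The `ask` parameter below stands in for whatever function calls your model (a wrapper around your provider's chat API, for example); it is deliberately passed in rather than assumed:

```python
def chain(steps, initial_input, ask):
    """Run a sequence of narrow prompts; each step's output
    becomes the {prev} input of the next step."""
    result = initial_input
    for template in steps:
        result = ask(template.format(prev=result))
    return result

# One narrow job per stage: strategy, then writing, then editing.
article_steps = [
    "Create a 5-point outline for an article based on this brief:\n{prev}",
    "Write a 600-word draft following this outline exactly:\n{prev}",
    "Edit this draft for clarity and concision:\n{prev}",
]
```

Because each stage has a single responsibility, you can inspect and correct the intermediate outputs before they propagate, just as with the numbered steps above.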

Task Definitions That Work for Different Output Types

Different output types benefit from different task structures. For written content, specify type, length, audience, tone, and purpose. For code, specify language, function, inputs, outputs, and any libraries to avoid. For analysis, specify what to analyze, what to look for, how many items to identify, and how to present findings. For summaries, specify what to include, what to exclude, the target length, and who the summary is for. Building up a personal library of task templates for your most common AI use cases will save you significant time and produce consistently better results than starting from scratch each time.
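A personal template library can be as simple as a dictionary of format strings, one per output type. The field names here are examples, not a standard; adapt them to your own recurring tasks:

```python
# Illustrative template library; the keys and placeholder names are
# assumptions, chosen to match the checklists above.
TASK_TEMPLATES = {
    "summary": (
        "Summarize {material} for {audience}. Include {include}; "
        "exclude {exclude}. Target length: {length}."
    ),
    "code": (
        "Write a {language} function that {behavior}. "
        "Inputs: {inputs}. Outputs: {outputs}. Do not use: {avoid}."
    ),
}

prompt = TASK_TEMPLATES["summary"].format(
    material="the attached quarterly report",
    audience="the executive team",
    include="revenue trends and open risks",
    exclude="department-level detail",
    length="150 words",
)
```

Filling a template forces you to answer the checklist for that output type every time, instead of rediscovering it from scratch.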

Prompt examples

✗ Weak prompt
Help me with my presentation.

No action is specified. Should it write slides? Create an outline? Suggest improvements? Review existing content? The model will guess and probably miss.

✓ Strong prompt
Act as a presentation coach. I have a 10-minute pitch to enterprise sales prospects. Review the following outline and: 1) Identify the 2 slides most likely to lose audience attention, 2) Suggest one specific improvement for each, 3) Flag any jargon that might confuse non-technical buyers. Outline: [paste outline here]

Three numbered subtasks, each specific and bounded. The model knows exactly what to produce for each step, and the output will be directly actionable.

Practical tips

  • Always use an action verb to open your task instruction: write, list, summarize, compare, generate, rewrite, debug, classify.
  • Define scope explicitly — how many items, how many words, how many steps — or the model will choose its own scope.
  • For multi-part tasks, use numbered steps rather than one long sentence stringing everything together.
  • Test a simplified version of your task before adding complexity — if the simple version fails, adding complexity won't fix it.
  • For recurring tasks, save your best task instructions as templates to reuse and refine over time.

Continue learning

  • Using Constraints in Prompts
  • Controlling Output Format
  • Prompt Chaining for Complex Tasks

PromptIt structures your task into clear, actionable instructions automatically so you don't have to.


More Prompt Engineering Basics guides

  • What is Prompt Engineering? (9 min)
  • How to Use Role in AI Prompts (8 min)
  • How to Add Context to AI Prompts (8 min)
  • Using Constraints in AI Prompts (8 min)

← Browse all guides