Chain-of-Thought
A prompting technique that asks the model to reason step-by-step before giving a final answer.
Full Definition
Chain-of-thought prompting encourages a language model to decompose a problem into intermediate reasoning steps before producing its final response. Rather than jumping straight to an answer, the model narrates its logic, which surfaces errors early and often substantially improves accuracy on math, logic, and multi-step tasks. It can be triggered explicitly, by adding a phrase like 'think step by step', or implicitly, by including worked examples that demonstrate step-by-step reasoning. The technique was popularised in a 2022 Google Brain paper (Wei et al.) and has since become a standard tool in the prompt engineer's toolkit.
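The implicit, few-shot variant mentioned above can be sketched in a few lines: a worked example that narrates its steps is prepended to the new question, so the model imitates the format. This is a minimal illustration; `few_shot_cot_prompt` and the example text are hypothetical, not from any library or benchmark.

```python
# Sketch of few-shot chain-of-thought prompting. The worked example below
# is invented for illustration; in practice you would curate several.
FEW_SHOT_EXAMPLES = """\
Q: A shelf holds 4 rows of 6 books. How many books are on the shelf?
A: Each row has 6 books. 4 rows × 6 books = 24 books. The answer is 24.
"""

def few_shot_cot_prompt(question: str) -> str:
    """Prepend step-by-step worked examples so the model copies the reasoning style."""
    return f"{FEW_SHOT_EXAMPLES}\nQ: {question}\nA:"

print(few_shot_cot_prompt("A bakery bakes 4 trays of 12 muffins. How many muffins?"))
```

Because the prompt ends with "A:", the model's completion naturally continues in the same narrated, step-by-step style as the example.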
Examples
Prompt: 'A train travels 60 km/h for 2 hours. How far does it go? Think step by step.' — Model first writes '60 × 2 = 120' then outputs '120 km'.
Asking a model to solve a logic puzzle by writing out each deduction before concluding who committed the crime.
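The explicit, zero-shot variant from the train example can be sketched the same way: append the trigger phrase, then read the final answer off the last line of the model's narrated output. Both helpers here (`build_cot_prompt`, `extract_final_answer`) are hypothetical, and the sample response is an illustrative completion, not real model output.

```python
def build_cot_prompt(question: str) -> str:
    """Append the zero-shot chain-of-thought trigger phrase to a question."""
    return f"{question}\nThink step by step."

def extract_final_answer(model_output: str) -> str:
    """Take the last non-empty line as the final answer, assuming the
    model narrates its intermediate steps on the lines before it."""
    lines = [ln.strip() for ln in model_output.splitlines() if ln.strip()]
    return lines[-1] if lines else ""

prompt = build_cot_prompt("A train travels 60 km/h for 2 hours. How far does it go?")

# Illustrative step-by-step completion in the shape the prompt encourages:
response = "Speed is 60 km/h and time is 2 hours.\n60 × 2 = 120.\n120 km"
print(extract_final_answer(response))  # 120 km
```

Real outputs vary in format, so production parsers usually look for an explicit marker (e.g. 'The answer is') rather than relying on the last line alone.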
Apply this in your prompts
PromptITIN automatically uses techniques like Chain-of-Thought to build better prompts for you.
Related Terms
Tree of Thoughts
Tree of Thoughts: A framework that explores multiple reasoning branches in parallel and selects th…
Self-Consistency: Sampling multiple reasoning paths and selecting the most common answer to improv…
ReAct Prompting: Interleaving reasoning traces and actions so a model can use tools and reflect o…