Training

Fine-Tuning

Continued training of a pretrained model on a smaller, task-specific dataset to specialise its behaviour.

Full Definition

Fine-tuning adapts a pretrained model by running additional gradient-descent steps on a curated dataset relevant to a target task or domain. Unlike training from scratch, it requires far less data and compute because the model retains the broad language understanding acquired during pretraining and only needs modest adjustment. Full fine-tuning updates all model parameters, while parameter-efficient methods (LoRA, QLoRA) freeze the pretrained weights and train only small adapter layers. Fine-tuning is used to teach a model a specific output style, inject domain knowledge absent from the pretraining data, or improve performance on narrow benchmarks. It is most beneficial when prompting alone cannot achieve the required consistency.
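To make the parameter-efficient idea concrete, here is a minimal plain-Python sketch of the LoRA-style update mentioned above: rather than retraining the full weight matrix W, only two small low-rank matrices A and B are trained, and the effective weight at inference is W + A·B. The matrix sizes and values are purely illustrative, not from any real model.

```python
def matmul(X, Y):
    """Plain-Python matrix multiply for small illustrative matrices."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))]
            for i in range(len(X))]

def lora_effective_weight(W, A, B, scale=1.0):
    """Return W + scale * (A @ B), leaving the frozen W untouched."""
    delta = matmul(A, B)
    return [[W[i][j] + scale * delta[i][j] for j in range(len(W[0]))]
            for i in range(len(W))]

# Frozen pretrained weight (4 x 4) and a rank-1 adapter (4x1 @ 1x4).
W = [[1.0 if i == j else 0.0 for j in range(4)] for i in range(4)]
A = [[0.1], [0.2], [0.0], [0.0]]  # trainable
B = [[1.0, 0.0, 0.0, 0.0]]       # trainable

W_eff = lora_effective_weight(W, A, B)
# Trainable parameters: 4*1 + 1*4 = 8, versus 16 for full fine-tuning —
# the gap grows quadratically with matrix size in real models.
```

The pay-off is the parameter count: a rank-r adapter on a d×d weight trains 2·d·r values instead of d², which is why LoRA fine-tuning fits on far smaller hardware than full fine-tuning.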

Examples

1. Fine-tuning GPT-3.5-turbo on 2,000 labelled medical Q&A pairs to reduce hallucination rates on clinical questions.

2. Fine-tuning a code model on a startup's internal Python SDK documentation so it can autocomplete proprietary function calls.
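For examples like the first one, the practical starting point is the training file. A common convention for chat-model fine-tuning data is JSONL with one messages-format record per line; the exact schema varies by provider, so treat this as a sketch, and note that the Q&A pair below is invented for illustration.

```python
import json

# Hypothetical labelled Q&A pairs, as in the medical example above.
pairs = [
    ("What is the first-line treatment for strep throat?",
     "Penicillin or amoxicillin, unless the patient is allergic."),
]

# Write one JSON object per line (JSONL), each holding a short chat.
with open("train.jsonl", "w") as f:
    for question, answer in pairs:
        record = {
            "messages": [
                {"role": "system",
                 "content": "You are a careful clinical assistant."},
                {"role": "user", "content": question},
                {"role": "assistant", "content": answer},
            ]
        }
        f.write(json.dumps(record) + "\n")
```

Each record pairs the input the model will see at inference time with the exact output you want it to learn; a few hundred to a few thousand such lines is the typical scale for this kind of task-specific fine-tune.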

Related Terms

LoRA (Low-Rank Adaptation)

A parameter-efficient fine-tuning method that updates only small low-rank matrices added to frozen pretrained weights.


Instruction Tuning

Supervised fine-tuning on diverse instruction-response pairs to improve a model's ability to follow instructions.


Transfer Learning

Reusing a model trained on one task as the starting point for a related task.
