
Base Model

A model trained only on next-token prediction over a large corpus, before any instruction tuning.

Full Definition

A base model (also called a pretrained model) has learned the statistical structure of language from massive text corpora but has not been fine-tuned to follow instructions or adopt a helpful persona. It continues text rather than answering questions — given 'The capital of France is', it predicts 'Paris' because that is the statistically expected continuation, not because it is trying to be helpful. Base models are the foundation on which instruction-tuned and RLHF-trained models are built. They are useful for researchers studying model capabilities and for practitioners who want maximum flexibility before layering application-specific fine-tuning.

Examples

1. GPT-3's original release was a base model; it required careful few-shot prompting rather than natural instruction-following.

2. Meta's Llama 3 base weights are released openly so developers can fine-tune them for their own use cases.
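Few-shot prompting, as mentioned in the GPT-3 example above, works by showing a base model a pattern it will statistically continue. A minimal sketch of such a prompt (the translation pairs and "English:"/"French:" labels are illustrative, not a fixed format):

```python
# A base model has no chat template; you steer it by demonstrating
# a pattern. Given this prompt, the expected continuation is "pain",
# even though no instruction was ever stated.
few_shot_prompt = (
    "English: sea otter -> French: loutre de mer\n"
    "English: cheese -> French: fromage\n"
    "English: bread -> French:"
)
print(few_shot_prompt)
```

An instruction-tuned model could instead be asked directly ("Translate 'bread' into French"); the few-shot scaffolding exists precisely because a base model only continues text.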


Related Terms

Pretraining

The initial phase of training a model on massive text data to learn general language patterns.


Instruction-Tuned Model

A model fine-tuned on instruction-response pairs to follow natural-language directions.


Fine-Tuned Model

A pretrained model whose weights have been updated on a specific dataset for a task.
