
Transfer Learning

Reusing a model trained on one task as the starting point for a related task.

Full Definition

Transfer learning is the machine learning practice of initialising a model with weights pretrained on a source task (usually a large, general dataset) and then adapting it to a target task (usually smaller and more specific). In NLP, this typically means starting from a large pretrained language model and fine-tuning it on task-specific data. Transfer learning dramatically improves performance on low-resource tasks, where training from scratch would overfit or fail outright. The success of models like BERT and GPT demonstrated that pretrained language representations transfer broadly across NLP tasks, making task-specific training from scratch largely obsolete for language applications.
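The core recipe (initialise from pretrained weights, freeze them, train a new task head) can be sketched in PyTorch. This is a minimal illustration, not a real pipeline: the "backbone" here stands in for a genuinely pretrained network such as a ResNet or BERT encoder, and all layer sizes are invented for the example.

```python
import torch
import torch.nn as nn

# Stand-in for a backbone whose weights came from a source task.
# (In practice you would load real pretrained weights here.)
backbone = nn.Sequential(
    nn.Linear(128, 64), nn.ReLU(),
    nn.Linear(64, 32), nn.ReLU(),
)
for p in backbone.parameters():
    p.requires_grad = False        # freeze the transferred weights

head = nn.Linear(32, 5)            # new task-specific head, trained from scratch
model = nn.Sequential(backbone, head)

# Only the head's parameters receive gradient updates.
optimiser = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-3
)

x = torch.randn(8, 128)            # dummy batch from the target task
loss = model(x).sum()
loss.backward()

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(trainable)                   # 32 * 5 + 5 = 165, the head alone
```

Freezing the backbone is the most conservative variant; many practitioners instead fine-tune all weights at a low learning rate once the head has stabilised.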

Examples

1. Using a ResNet pretrained on ImageNet as the backbone for a medical X-ray classifier trained on only 5,000 labelled scans.

2. Fine-tuning BERT on a 1,000-example contract clause classification dataset, reaching 94% accuracy, a level unattainable when training from scratch on so little data.

Apply this in your prompts

PromptITIN automatically uses techniques like Transfer Learning to build better prompts for you.


Related Terms

Fine-Tuning

Continuing training of a pretrained model on a smaller, task-specific dataset to specialise it for that task.


Pretraining

The initial phase of training a model on massive text data to learn general language patterns.


Foundation Model

A large model trained on broad data that can be adapted to many downstream tasks.
