Transfer Learning
Reusing a model trained on one task as the starting point for a related task.
Full Definition
Transfer learning is the machine learning practice of initialising a model with weights pretrained on a source task (usually a large, general dataset) and then adapting it to a target task (usually smaller and more specific). In NLP, this typically means starting from a large pretrained language model and fine-tuning it on task-specific data. Transfer learning dramatically improves performance on low-resource tasks where training from scratch would overfit or fail. The success of models like BERT and GPT demonstrated that pretrained language representations transfer broadly across NLP tasks, making task-specific training from scratch largely obsolete for language applications.
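The core mechanic — freeze (or reuse) pretrained weights, then train only a small task-specific head on the target data — can be sketched in miniature. This is an illustrative toy, not a real pretrained model: the "backbone" here is a fixed random projection standing in for weights learned on a large source task, and the dataset is synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Pretrained" backbone: a fixed projection standing in for weights
# learned on a large source task (e.g. ImageNet or a text corpus).
# It is frozen — no gradients ever update W_backbone.
W_backbone = rng.normal(size=(20, 8))

def extract_features(x):
    # Frozen feature extractor reused from the source task.
    return np.tanh(x @ W_backbone)

# Small target-task dataset (the low-resource setting).
X = rng.normal(size=(200, 20))
true_w = rng.normal(size=8)
y = (extract_features(X) @ true_w > 0).astype(float)

# New task-specific head, trained from scratch on the target data only.
w_head = np.zeros(8)
b_head = 0.0
lr = 0.5
feats = extract_features(X)
for _ in range(500):
    z = feats @ w_head + b_head
    p = 1.0 / (1.0 + np.exp(-z))        # sigmoid
    grad = p - y                        # logistic-loss gradient
    w_head -= lr * feats.T @ grad / len(X)
    b_head -= lr * grad.mean()

preds = (feats @ w_head + b_head > 0).astype(float)
accuracy = (preds == y).mean()
print(f"head-only training accuracy: {accuracy:.2f}")
```

Because the target labels are linearly separable in the backbone's feature space, a tiny head fits them with very little data — the same reason fine-tuning a pretrained model succeeds where full from-scratch training would overfit.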
Examples
Using a ResNet pretrained on ImageNet as the backbone for a medical X-ray classifier trained on only 5,000 labelled scans.
Fine-tuning BERT on a 1,000-example contract clause classification dataset, reaching 94% accuracy — a level unattainable when training from scratch on so few examples.
Apply this in your prompts
PromptITIN automatically uses techniques like Transfer Learning to build better prompts for you.