Video: Prompt Tuning

Summary

Large language models (LLMs) like ChatGPT are foundation models – large reusable models trained on vast amounts of internet data.

– To specialize an LLM for a task, the traditional approach was fine-tuning – gathering and labeling many examples and updating the model's weights.

– A newer approach is prompt tuning – using cues or prompts to give the model task-specific context without retraining.

– Prompt engineering involves humans crafting prompts to guide the LLM.

– Soft prompts are learned by the AI itself rather than written by humans, and they have been shown to outperform hand-crafted prompts. They are strings of numbers (embeddings) with no human-readable meaning that prime the model for a task.

– With prompt tuning, a soft prompt is combined with the input to specialize the model.
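The mechanics of combining a soft prompt with the input can be sketched in a few lines. This is a minimal illustration, not an actual LLM: the dimensions, the random embedding table, and the variable names are all hypothetical, and in practice the soft prompt would be optimized by gradient descent while the model stays frozen.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes for illustration only.
vocab_size, embed_dim = 1000, 16
prompt_len = 5            # number of soft-prompt vectors
token_ids = [42, 7, 256]  # the user's input, as token ids

# Frozen embedding table, standing in for the pretrained LLM's embeddings.
embedding_table = rng.normal(size=(vocab_size, embed_dim))

# The soft prompt: trainable vectors not tied to any vocabulary token.
soft_prompt = rng.normal(size=(prompt_len, embed_dim))

# Embed the input tokens, then prepend the soft prompt.
input_embeds = embedding_table[token_ids]             # shape (3, 16)
model_input = np.vstack([soft_prompt, input_embeds])  # shape (8, 16)

print(model_input.shape)  # (8, 16)
```

Only the small `soft_prompt` matrix is task-specific; the large model and its embedding table are reused unchanged, which is why prompt tuning is so much cheaper than fine-tuning.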

– Prompt tuning adapts models faster and more cheaply than fine-tuning or prompt engineering.

– It allows quick switching between tasks in multitask learning.
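Because each task needs only its own small soft prompt, switching tasks amounts to swapping one small tensor in front of the frozen model. A minimal sketch, with made-up task names and dimensions:

```python
import numpy as np

rng = np.random.default_rng(1)
embed_dim, prompt_len = 16, 5

# One frozen model serves every task; only the soft prompt differs.
# Task names here are hypothetical examples.
task_prompts = {
    "sentiment": rng.normal(size=(prompt_len, embed_dim)),
    "translation": rng.normal(size=(prompt_len, embed_dim)),
}

def select_prompt(task: str) -> np.ndarray:
    """Switching tasks is just looking up a different prompt tensor."""
    return task_prompts[task]

print(select_prompt("sentiment").shape)  # (5, 16)
```

Storing one small matrix per task, instead of one fine-tuned copy of the model per task, is what makes multitask switching practical.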

– Prompt tuning shows promise for continual learning – learning new skills without forgetting old ones.

– Overall, prompt tuning is a game changer to adapt LLMs to new specialized tasks quickly and efficiently.