Video: Fine-tuning

Summary of Fine-tuning vs. Semantic Search

– Fine-tuning is a form of transfer learning: it teaches a model a new task or behavior, but it does not teach it new information.

– Semantic search matches on meaning and context rather than exact keywords. It works by comparing vector embeddings and scales well to large corpora.
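To make the embedding idea concrete, here is a minimal sketch of semantic search: rank documents by cosine similarity between embedding vectors. The documents and the 3-dimensional vectors below are invented for illustration; a real system would get embeddings from a model (e.g. a sentence-embedding model) and use a vector index rather than a linear scan.

```python
import math

# Toy corpus with made-up 3-dimensional "embeddings" (real embeddings
# come from a model and have hundreds or thousands of dimensions).
corpus = {
    "How do I reset my password?":       [0.9, 0.1, 0.0],
    "Shipping takes 3-5 business days.": [0.0, 0.2, 0.9],
    "Contact support to change login.":  [0.8, 0.3, 0.1],
}

def cosine(a, b):
    # Similarity of direction between two vectors, in [-1, 1].
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def semantic_search(query_vec, corpus, top_k=2):
    # Rank documents by closeness in meaning (vector angle),
    # not by shared keywords.
    ranked = sorted(corpus.items(),
                    key=lambda kv: cosine(query_vec, kv[1]),
                    reverse=True)
    return [doc for doc, _ in ranked[:top_k]]

# Pretend embedding of the query "I can't sign in" — it shares no
# keywords with the password document but points in a similar direction.
query = [0.85, 0.2, 0.05]
print(semantic_search(query, corpus))
```

Note that the query matches the password and login documents despite having no keywords in common with them; that is the "meaning, not keywords" property the notes describe.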

– Fine-tuning and semantic search are very different technologies. Fine-tuning is the wrong tool for question answering (QA) over a corpus.

– The biggest misconception is that fine-tuning a model like GPT-3 on a new corpus will let it answer questions about that corpus. It will not.

– Fine-tuning typically unfreezes only a small part of the model (e.g., the final layers). It does not retrain the whole network.
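The point above can be sketched in a few lines. This is a hand-rolled toy, not any real framework's API: the pretrained layers are marked frozen and a training step updates only the small unfrozen head, so the frozen weights (and whatever behavior they encode) never change.

```python
# Toy sketch: fine-tuning freezes most of the model and updates
# only a small task head. Weights here are single floats standing
# in for real weight tensors.

class Layer:
    def __init__(self, weight, trainable):
        self.weight = weight        # stand-in for a weight tensor
        self.trainable = trainable  # frozen layers get no updates

model = [
    Layer(weight=1.0, trainable=False),  # frozen pretrained layer
    Layer(weight=2.0, trainable=False),  # frozen pretrained layer
    Layer(weight=0.5, trainable=True),   # small task head, unfrozen
]

def training_step(model, grad=0.25, lr=1.0):
    # Gradient updates touch only the unfrozen layers, so the
    # pretrained weights are preserved — and, crucially, no new
    # facts get written into them.
    for layer in model:
        if layer.trainable:
            layer.weight -= lr * grad

training_step(model)
print([layer.weight for layer in model])  # → [1.0, 2.0, 0.25]
```

Only the head moved; the frozen layers came through the "training" untouched, which is why fine-tuning adjusts behavior rather than adding knowledge.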

– Fine-tuning does not suppress the model's tendency to hallucinate (confabulate). It can still make up answers confidently.

– Analogy: as with transfer learning generally, fine-tuning transfers a skill. Once you learn to tie your shoes, you can tie other knots — you have gained a skill, not new facts.

– So fine-tuning does not teach the model new information or the content of your corpus.

– It merely nudges the model to be slightly better at a specific task format, such as QA. It does not add knowledge.
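For concreteness, GPT-3-style fine-tuning data is just prompt/completion pairs in a JSONL file; the two lines below are an invented sketch of that format, not real training data.

```jsonl
{"prompt": "Q: What is our refund window?\n\nA:", "completion": " 30 days."}
{"prompt": "Q: Who do I contact about billing?\n\nA:", "completion": " The billing team."}
```

Training on pairs like these teaches the model the terse Q-and-A *format*; it does not make the model reliably recall "30 days" at inference time — it may still confabulate a different answer, which is exactly the misconception the notes warn about.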