Instruction Fine-Tuning with LoRA (Mini Project 5)

- Understand the difference between standard fine-tuning and instruction fine-tuning (IFT)
- Learn when to apply fine-tuning, IFT, or RAG based on domain, style, or output requirements
- Explore lightweight tuning methods such as LoRA, BitFit, and prompt tuning
- Build instruction-tuned systems for targeted outputs such as JSON, tone, formatting, or domain-specific tasks (see the formatting sketch after this list)
- Apply fine-tuning to real case studies: HTML generation, resume scoring, and financial tasks
- Use Hugging Face PEFT tools to train and evaluate LoRA-tuned models (a minimal setup sketch follows this list)
- Understand tokenizer compatibility, loss choices, and runtime hardware considerations
- Compare the instruction-following performance of base vs. IFT models with real examples
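
To ground the "structured outputs" objective, here is a minimal sketch of turning a raw record into an instruction-tuning example whose target is JSON. The Alpaca-style `### Instruction:` / `### Response:` template and the field names are illustrative assumptions, not a prescribed format from this project.

```python
import json

# Hypothetical helper: serialize one IFT training example whose target is JSON.
def format_example(instruction: str, output: dict) -> str:
    return (
        "### Instruction:\n"
        f"{instruction}\n\n"
        "### Response:\n"
        f"{json.dumps(output)}"  # the model learns to emit valid JSON as its answer
    )

# Example record for a resume-scoring-style task (values are made up).
print(format_example(
    "Extract the applicant's name and years of experience as JSON.",
    {"name": "Jane Doe", "years_experience": 7},
))
```

Training on many such pairs teaches the model the response schema itself, which is what distinguishes IFT for structured output from prompt-only approaches.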
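
And for the PEFT objective, a minimal sketch of attaching LoRA adapters to a causal LM with the Hugging Face `peft` library. The base checkpoint (`gpt2`), rank, alpha, and target module are illustrative choices; `target_modules` in particular varies by architecture (here, GPT-2's fused attention projection `c_attn`).

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model, TaskType

# Placeholder base model; swap in the checkpoint you actually tune.
model = AutoModelForCausalLM.from_pretrained("gpt2")

lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,                 # low-rank dimension; small values keep the adapter tiny
    lora_alpha=16,       # scaling factor applied to the adapter update
    lora_dropout=0.05,
    target_modules=["c_attn"],  # GPT-2-specific; other models use names like q_proj/v_proj
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically reports well under 1% of weights as trainable
```

The wrapped model trains like any `transformers` model (e.g. with `Trainer`), but only the adapter weights receive gradients, which is what makes LoRA feasible on modest hardware.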