How RAG Finetuning and RLHF Fit in Production
- End-to-End LLM Finetuning & Orchestration using RL
  - Prepare instruction-tuning datasets (synthetic + human); see the formatting sketch after this list
  - Finetune a small LLM on your RAG tasks
  - Use RL to finetune on the same dataset and compare results across all approaches
  - Select the appropriate finetuning approach and build the RAG system
  - Implement orchestration patterns (pipelines, agents)
  - Set up continuous monitoring integration using Braintrust (logging sketch after this list)
- RL Frameworks in Practice
  - Use DSPy, the OpenAI API, LangChain's RLChain, OpenPipe ART, and PufferLib for RLHF tasks
- Rubric-Based Reward Systems
  - Design interpretable rubrics to score reasoning, structure, and correctness (rubric scorer sketch after this list)
- Real-World Applications of RLHF
  - Explore applications in summarization, email tuning, and web agent finetuning
- RL and RLHF for RAG
  - Apply RL techniques to optimize retrieval and generation in RAG pipelines
  - Use RLHF to improve response quality based on user feedback and preferences
- Exercises: End-to-End RAG with Finetuning & RLHF
  - Finetune a small LLM (Llama 3.2 3B or Qwen 2.5 3B) on the ELI5 dataset using LoRA/QLoRA (LoRA sketch after this list)
  - Apply RLHF with rubric-based rewards to optimize responses
  - Build a production RAG pipeline with DSPy orchestration, logging, and monitoring (DSPy sketch after this list)
  - Compare base → finetuned → RLHF-optimized models (comparison harness after this list)
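
A minimal sketch of the dataset-preparation step: mapping raw ELI5-style Q&A pairs (human-written and synthetic) into a chat-style instruction format. The field names (`question`, `answer`), the system prompt, and the example rows are illustrative assumptions, not a fixed schema from the course.

```python
# Sketch: turn raw Q&A records into chat-format instruction-tuning examples.
# Field names and the system prompt are assumptions for illustration.

def to_instruction_example(record: dict) -> dict:
    """Map one raw Q&A record to a single chat-format training example."""
    return {
        "messages": [
            {"role": "system", "content": "Explain the answer simply, as if to a five-year-old."},
            {"role": "user", "content": record["question"]},
            {"role": "assistant", "content": record["answer"]},
        ]
    }

# Mix human-written and synthetic pairs into one training set.
human_rows = [
    {"question": "Why is the sky blue?",
     "answer": "Sunlight scatters off air molecules, and blue light scatters the most."},
]
synthetic_rows = [
    {"question": "Why do ice cubes float?",
     "answer": "Ice is less dense than liquid water, so it stays on top."},
]

train_examples = [to_instruction_example(r) for r in human_rows + synthetic_rows]
```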
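
A hedged sketch of the LoRA finetune from the exercises, using Hugging Face `datasets`, `peft`, and `trl` on the `train_examples` produced above. The model ID (`Qwen/Qwen2.5-3B-Instruct`), the LoRA hyperparameters, and the `SFTTrainer`/`SFTConfig` arguments are assumptions; exact argument names vary across `trl` versions, so check them against the version you install.

```python
# Sketch: LoRA supervised finetune of a small instruct model on the chat-format
# examples from the previous snippet. Hyperparameters are placeholders.
from datasets import Dataset
from peft import LoraConfig
from trl import SFTConfig, SFTTrainer

train_dataset = Dataset.from_list(train_examples)  # conversational "messages" format

peft_config = LoraConfig(
    r=16,                       # LoRA rank
    lora_alpha=32,              # scaling factor
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)

trainer = SFTTrainer(
    model="Qwen/Qwen2.5-3B-Instruct",   # recent trl versions accept a model ID string
    train_dataset=train_dataset,
    peft_config=peft_config,
    args=SFTConfig(
        output_dir="rag-sft-lora",
        num_train_epochs=1,
        per_device_train_batch_size=2,
    ),
)
trainer.train()
trainer.save_model("rag-sft-lora")
```

For QLoRA, the same setup would load the base model in 4-bit (e.g. via a quantization config) before attaching the LoRA adapters; the training loop itself is unchanged.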
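
One way to make the rubric-based reward concrete: score each criterion separately and combine the scores with weights, so the reward stays interpretable. The criteria, weights, and the `judge` callable are illustrative assumptions; in practice the judge might be an LLM grader or a simple heuristic.

```python
# Sketch: an interpretable rubric reward that scores reasoning, structure, and
# correctness separately, then combines them into one scalar.
from dataclasses import dataclass

@dataclass
class RubricCriterion:
    name: str
    description: str
    weight: float

RUBRIC = [
    RubricCriterion("correctness", "Answer is factually consistent with the retrieved context.", 0.5),
    RubricCriterion("reasoning", "Explanation proceeds step by step without logical gaps.", 0.3),
    RubricCriterion("structure", "Response is concise and clearly organized.", 0.2),
]

def rubric_reward(question: str, answer: str, context: str, judge) -> float:
    """Combine per-criterion scores (each clamped to 0-1) into a single reward.

    `judge` is any callable that scores one criterion for one response; it is a
    stand-in here, not part of a specific framework.
    """
    total = 0.0
    for criterion in RUBRIC:
        score = judge(question=question, answer=answer, context=context, criterion=criterion)
        total += criterion.weight * max(0.0, min(1.0, score))
    return total
```

The resulting scalar can then be fed to whichever reward-based trainer you use for the RLHF step (for example trl's GRPO trainer or OpenPipe ART); the wiring depends on the framework.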
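
A sketch of the DSPy orchestration piece: a small `dspy.Module` that retrieves passages and then generates an answer with chain-of-thought. It assumes a language model and retrieval model have already been configured (e.g. via `dspy.configure`); the signature string, `k`, and class name are placeholders, and the retrieval API differs somewhat across DSPy versions.

```python
# Sketch: a DSPy RAG module that retrieves passages, then answers with
# chain-of-thought. Assumes dspy.configure(lm=..., rm=...) has been called.
import dspy

class RAGPipeline(dspy.Module):
    def __init__(self, num_passages: int = 3):
        super().__init__()
        self.retrieve = dspy.Retrieve(k=num_passages)
        self.generate = dspy.ChainOfThought("context, question -> answer")

    def forward(self, question: str):
        context = self.retrieve(question).passages
        return self.generate(context=context, question=question)

# Usage (after configuring an LM and retriever):
# prediction = RAGPipeline()(question="Why is the sky blue?")
# print(prediction.answer)
```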
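
For the Braintrust monitoring integration, a rough sketch of logging each production request along with its rubric score. The specific SDK calls (`braintrust.init_logger`, `logger.log`) and field names are assumptions from the Braintrust Python SDK and should be verified against the current docs; the project name and metadata are placeholders.

```python
# Sketch: log production RAG requests and their rubric scores to Braintrust.
# SDK call names and fields are assumptions; verify against the Braintrust docs.
import braintrust

logger = braintrust.init_logger(project="rag-production")  # placeholder project name

def answer_and_log(question: str, rag_pipeline, scorer) -> str:
    """Run the RAG pipeline, log the interaction, and return the answer."""
    prediction = rag_pipeline(question=question)
    logger.log(
        input={"question": question},
        output={"answer": prediction.answer},
        scores={"rubric": scorer(question, prediction.answer)},
        metadata={"stage": "rlhf-optimized"},
    )
    return prediction.answer
```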
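
Finally, a small comparison harness for the base → finetuned → RLHF-optimized evaluation, reusing the rubric scorer as the metric. The `generate_fn` callables, the eval-set schema, and the stage labels are assumptions.

```python
# Sketch: compare the three model stages on a shared eval set using one scoring
# function (e.g. the rubric reward above) as the metric.
from statistics import mean

def evaluate(generate_fn, eval_set, score_fn) -> float:
    """Average score of one model variant over the eval set."""
    return mean(
        score_fn(ex["question"], generate_fn(ex["question"]), ex.get("context", ""))
        for ex in eval_set
    )

def compare_stages(stages: dict, eval_set, score_fn) -> dict:
    """`stages` maps a label ('base', 'finetuned', 'rlhf') to a generate_fn."""
    results = {name: evaluate(fn, eval_set, score_fn) for name, fn in stages.items()}
    for name, score in sorted(results.items(), key=lambda kv: kv[1], reverse=True):
        print(f"{name:>10}: {score:.3f}")
    return results
```

For example, `score_fn` could be `functools.partial(rubric_reward, judge=my_judge)` from the rubric sketch, and each `generate_fn` a thin wrapper around one checkpoint (base, LoRA-finetuned, or RLHF-optimized).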