Showing results for "rlhf"


        lesson

How RAG, Finetuning, and RLHF Fit in Production

- End-to-End LLM Finetuning & Orchestration using RL
  - Prepare instruction-tuning datasets (synthetic + human)
  - Finetune a small LLM on your RAG tasks
  - Use RL to finetune the same dataset and compare results across all approaches
  - Select the appropriate finetuning approach and build RAG
  - Implement orchestration patterns (pipelines, agents)
  - Set up continuous monitoring integration using Braintrust
- RL Frameworks in Practice
  - Use DSPy, OpenAI API, LangChain's RLChain, OpenPipe ART, and PufferLib for RLHF tasks
- Rubric-Based Reward Systems
  - Design interpretable rubrics to score reasoning, structure, and correctness
- Real-World Applications of RLHF
  - Explore applications in summarization, email tuning, and web agent fine-tuning
- RL and RLHF for RAG
  - Apply RL techniques to optimize retrieval and generation in RAG pipelines
  - Use RLHF to improve response quality based on user feedback and preferences
- Exercises: End-to-End RAG with Finetuning & RLHF
  - Finetune a small LLM (Llama 3.2 3B or Qwen 2.5 3B) on the ELI5 dataset using LoRA/QLoRA
  - Apply RLHF with rubric-based rewards to optimize responses
  - Build production RAG with DSPy orchestration, logging, and monitoring
  - Compare base → finetuned → RLHF-optimized models
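The rubric-based rewards mentioned above can be sketched with a few lines of plain Python: each rubric criterion is a cheap, interpretable check on the response, and the reward is their weighted sum. All criteria, names, and weights below are illustrative assumptions, not taken from any particular framework or the lesson itself.

```python
# Minimal sketch of a rubric-based reward: each criterion is a simple,
# interpretable check, and the scalar reward is their weighted sum.
# Criteria and weights are made up for illustration.

def rubric_reward(response: str) -> float:
    """Score a model response in [0, 1] against three toy rubric criteria."""
    criteria = {
        # Structure: the answer is broken into multiple sentences.
        "structure": response.count(".") >= 2,
        # Brevity: stays within a rough length budget.
        "brevity": len(response.split()) <= 80,
        # Hedging: avoids absolute claims (a toy proxy for answer quality).
        "no_absolutes": not any(w in response.lower() for w in ("always", "never")),
    }
    weights = {"structure": 0.4, "brevity": 0.3, "no_absolutes": 0.3}
    return sum(weights[name] for name, passed in criteria.items() if passed)

good = "LoRA adapts a small set of weights. It is cheap to train. It often works well."
bad = "Finetuning always beats prompting"
print(rubric_reward(good))  # meets all three criteria
print(rubric_reward(bad))   # short, but one sentence and an absolute claim
```

In a real RLHF loop, the score a function like this returns would be fed to the optimizer (e.g. as the reward in PPO or GRPO) in place of, or alongside, a learned reward model.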

        lesson

        RL & RLHF Framework

- DSPy + RL Integration
  - Explore DSPy's prompt optimizer and RL system built into the pipeline
- LangChain RL
  - Use LangChain's experimental RL chain for reinforcement learning tasks
- RL Fine-Tuning with OpenAI API
  - Implement RL fine-tuning using OpenAI's API
- RL Fine-Tuning Applications
  - Apply RL fine-tuning for state-of-the-art email generation
  - Apply RL fine-tuning for summarization tasks
- RL Fine-Tuning with OpenPipe
  - Use OpenPipe for RL fine-tuning workflows
- DPO/PPO/GRPO Comparison
  - Compare Direct Preference Optimization, Proximal Policy Optimization, and Group Relative Policy Optimization approaches
- Reinforcement Learning with Verifiable Rewards (RLVR)
  - Learn about RLVR methodology for training with verifiable reward signals
- Rubric-Based RL Systems
  - Explore rubric-based systems to guide RL at inference time for multi-step reasoning
- Training Agents to Control Web Browsers
  - Train agents to control web browsers with RL and Imitation Learning
- Exercises: RL Frameworks & Advanced Algorithms
  - Compare DSPy vs LangChain for building QA systems
  - Implement GRPO and RLVR algorithms
  - Build multi-turn agents with turn-level credit assignment
  - Create privacy-preserving multi-model systems (PAPILLON) with utility-privacy tradeoffs
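The distinguishing step of GRPO (Group Relative Policy Optimization), one of the algorithms compared above, can be sketched in isolation: sample a group of completions per prompt, score each one, and compute each completion's advantage relative to the group's mean and standard deviation, with no learned value network. The reward values below are hypothetical.

```python
# Sketch of GRPO's group-relative advantage computation: each sampled
# completion's reward is normalized against its own group's statistics.
import statistics

def group_relative_advantages(rewards: list[float]) -> list[float]:
    """GRPO-style advantages for one group: (r - mean) / std."""
    mean = statistics.fmean(rewards)
    std = statistics.pstdev(rewards) or 1.0  # guard: zero-variance group
    return [(r - mean) / std for r in rewards]

# Four sampled completions for one prompt, scored by some reward source
# (a reward model, a rubric, or a verifiable check). Values are made up.
rewards = [0.2, 0.5, 0.9, 0.4]
advantages = group_relative_advantages(rewards)
# Completions above the group mean get positive advantages and are
# reinforced; those below get negative advantages and are discouraged.
```

This is the piece PPO replaces with a learned critic and DPO sidesteps entirely by optimizing directly on preference pairs, which is the crux of the comparison the lesson draws.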

        lesson

        Intro RL & RLHF

- Markov Processes as LLM Analogies
  - Frame token generation as a Markov Decision Process (MDP) with states, actions, and rewards
- Monte Carlo vs Temporal Difference Learning
  - Compare Monte Carlo episode-based learning with Temporal Difference updates, and their relevance to token-level prediction
- Q-Learning & Policy Gradients
  - Explore conceptual foundations of Q-learning and policy gradients as the basis of RLHF and preference optimization
- RL in Decoding and Chain-of-Thought
  - Apply RL ideas during inference without retraining, including CoT prompting with reward feedback and speculative decoding verification
- Exercises: RL Foundations with Neural Networks
  - Implement token generation as an MDP with policy and value networks
  - Compare Monte Carlo vs Temporal Difference learning for value estimation
  - Build Q-Learning from tables to DQN with experience replay
  - Implement REINFORCE with baseline subtraction and entropy regularization
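The MDP framing above can be made concrete with a toy tabular example: states are token prefixes, actions are next tokens, and a terminal reward is propagated backwards with a TD(0)-style Q-learning update. The three-token vocabulary and the reward (sequences containing "ab") are invented for illustration; a real LLM setting replaces the table with a network over a huge state space.

```python
# Toy sketch of token generation as an MDP, learned with tabular Q-learning.
# Vocabulary, episode length, and reward are all made-up assumptions.
import random
from collections import defaultdict

VOCAB = ["a", "b", "<eos>"]
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.2
Q = defaultdict(float)  # Q[(prefix, token)], zero-initialized

def reward(prefix: str) -> float:
    # Hypothetical goal: reward finished sequences that contain "ab".
    return 1.0 if "ab" in prefix else 0.0

random.seed(0)
for _ in range(2000):
    prefix = ""
    while len(prefix) < 4:
        # Epsilon-greedy choice of the next token (the MDP action).
        if random.random() < EPS:
            tok = random.choice(VOCAB)
        else:
            tok = max(VOCAB, key=lambda t: Q[(prefix, t)])
        nxt = prefix + tok
        done = tok == "<eos>" or len(nxt) >= 4
        r = reward(nxt) if done else 0.0
        # TD(0) backup: bootstrap from the best next-token value.
        target = r if done else GAMMA * max(Q[(nxt, t)] for t in VOCAB)
        Q[(prefix, tok)] += ALPHA * (target - Q[(prefix, tok)])
        prefix = nxt
        if done:
            break

# After training, greedy decoding from the empty prefix starts with "a",
# since that leads toward the rewarded "ab" pattern.
```

The bootstrapped `target` line is exactly what distinguishes Temporal Difference learning from Monte Carlo, which would instead wait for the episode to finish and back up the full observed return.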

