Explore all newline lessons
Lesson (AI Bootcamp): Instructional Finetuning & LoRA
Differentiate fine-tuning vs instruction fine-tuning, apply LoRA/BitFit/prompt tuning, and use Hugging Face PEFT for JSON, tone, or domain tasks.
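A minimal sketch of the kind of PEFT setup this lesson covers: attaching a LoRA adapter to a causal LM so only the low-rank adapter weights train. The base model (`gpt2`) and the `target_modules` choice are illustrative assumptions, not the lesson's own configuration.

```python
# Attach a LoRA adapter with Hugging Face PEFT (model and module names are assumed).
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("gpt2")  # assumed base model

config = LoraConfig(
    r=8,                        # rank of the low-rank update matrices
    lora_alpha=16,              # scaling factor applied to the update
    target_modules=["c_attn"],  # GPT-2's fused attention projection
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

peft_model = get_peft_model(model, config)
peft_model.print_trainable_parameters()  # only the adapter weights are trainable
```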
Lesson (AI Bootcamp): Multi-Head Attention & Mixture of Experts (MoE)
Build single-head and multi-head transformer models, implement Mixture-of-Experts (MoE) attention, and evaluate fluency/generalization.
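A rough sketch of the two building blocks named above: multi-head attention via PyTorch's built-in module, plus a top-1 mixture-of-experts feed-forward layer as a simplified stand-in for the lesson's MoE attention. All sizes are made-up assumptions.

```python
import torch
import torch.nn as nn

embed_dim, num_heads = 64, 8
mha = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)

x = torch.randn(2, 10, embed_dim)      # (batch, seq_len, embed_dim)
out, weights = mha(x, x, x)            # self-attention: query = key = value
print(out.shape, weights.shape)        # (2, 10, 64) and (2, 10, 10)

class TinyMoE(nn.Module):
    """Route each token to one of several expert layers via a learned gate."""
    def __init__(self, dim, num_experts=4):
        super().__init__()
        self.gate = nn.Linear(dim, num_experts)
        self.experts = nn.ModuleList(nn.Linear(dim, dim) for _ in range(num_experts))

    def forward(self, x):                       # x: (batch, seq, dim)
        route = self.gate(x).argmax(dim=-1)     # hard top-1 routing decision
        out = torch.zeros_like(x)
        for i, expert in enumerate(self.experts):
            mask = (route == i).unsqueeze(-1)   # tokens assigned to expert i
            out = out + mask * expert(x)        # sketch only: runs every expert on all tokens
        return out

print(TinyMoE(embed_dim)(out).shape)            # (2, 10, 64)
```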
Lesson (AI Bootcamp): Implementing Self-Attention
Implement self-attention in PyTorch, visualize attention heatmaps with real LLMs, and compare loss curves vs trigram models.
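A minimal single-head self-attention module in PyTorch, with sizes chosen purely for illustration; the returned weights matrix is what an attention heatmap visualizes.

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class SelfAttention(nn.Module):
    """Single-head scaled dot-product self-attention (sizes are illustrative)."""
    def __init__(self, dim):
        super().__init__()
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, dim)

    def forward(self, x):                          # x: (batch, seq, dim)
        q, k, v = self.q(x), self.k(x), self.v(x)
        scores = q @ k.transpose(-2, -1) / math.sqrt(x.size(-1))
        attn = F.softmax(scores, dim=-1)           # each row sums to 1
        return attn @ v, attn                      # attn is what a heatmap plots

x = torch.randn(1, 5, 16)
out, attn = SelfAttention(16)(x)
print(attn[0])   # 5x5 matrix; plt.imshow(attn[0].detach()) renders the heatmap
```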
Lesson (AI Bootcamp): Mechanics of Self-Attention
Learn self-attention mechanics (Query, Key, Value, dot products, weighted sums), compute attention scores, and visualize softmax’s role.
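A worked numeric example of the mechanics listed above: dot products between queries and keys, softmax into weights, then a weighted sum of values. The 2-token, 2-dimensional Q/K/V matrices are made up for illustration.

```python
import torch

Q = torch.tensor([[1.0, 0.0], [0.0, 1.0]])    # one query row per token
K = torch.tensor([[1.0, 0.0], [1.0, 1.0]])    # keys
V = torch.tensor([[10.0, 0.0], [0.0, 10.0]])  # values

scores = Q @ K.T / (2 ** 0.5)        # scaled dot products, shape (2, 2)
weights = scores.softmax(dim=-1)     # softmax turns scores into probabilities
output = weights @ V                 # weighted sum of value vectors
print(scores, weights, output, sep="\n")
```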
Lesson (AI Bootcamp): Motivation for Attention Mechanisms
Understand the limitations of fixed-window n-gram models and explore how word meaning changes with context (static vs contextual embeddings).
Lesson (AI Bootcamp): Neural N-Gram Models
One-hot encode inputs, build PyTorch bigram/trigram neural networks, train with cross-entropy loss, and monitor training dynamics.
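A toy version of the training loop this lesson builds: a one-hot current character into a linear layer, optimized with cross-entropy. The vocabulary size and training pairs are placeholders.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

vocab_size = 27   # assumed: 'a'-'z' plus one boundary token
model = nn.Linear(vocab_size, vocab_size, bias=False)
opt = torch.optim.SGD(model.parameters(), lr=0.1)

xs = torch.tensor([0, 1, 2])   # toy pairs: current char index ...
ys = torch.tensor([1, 2, 3])   # ... predicts next char index

for step in range(100):
    logits = model(F.one_hot(xs, vocab_size).float())  # one-hot in, logits out
    loss = F.cross_entropy(logits, ys)                 # NLL of the true next char
    opt.zero_grad()
    loss.backward()
    opt.step()

print(loss.item())   # should fall toward 0 on this memorizable toy set
```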
Lesson (AI Bootcamp): Evaluating N-Gram Models
Evaluate model quality using entropy, character diversity, and negative log likelihood (NLL).
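A small sketch of the NLL computation, assuming a row-stochastic bigram matrix `P` (where `P[i][j]` is the probability of character `j` following character `i`) and a char-to-index map `stoi`; both names are hypothetical.

```python
import math

def avg_nll(text, P, stoi):
    """Average negative log likelihood per bigram; lower is better."""
    total = 0.0
    pairs = list(zip(text, text[1:]))
    for a, b in pairs:
        total += -math.log(P[stoi[a]][stoi[b]])
    return total / len(pairs)

stoi = {"a": 0, "b": 1}          # made-up vocabulary
P = [[0.5, 0.5], [0.9, 0.1]]     # made-up bigram probabilities
print(avg_nll("abab", P, stoi))  # ~0.497; exp of this is the perplexity
```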
Lesson (AI Bootcamp): Building and Sampling N-Gram Models
Construct frequency dictionaries, normalize into probability matrices, and sample random text using bigram/trigram models.
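A compact sketch of that pipeline for the bigram case: count frequencies with a sliding window, normalize each row into probabilities, then sample. The corpus and seed character are made up.

```python
import random
from collections import Counter, defaultdict

corpus = "the cat sat on the mat"
counts = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):      # sliding window of size 2
    counts[a][b] += 1

probs = {a: {b: n / sum(c.values()) for b, n in c.items()}
         for a, c in counts.items()}      # each row now sums to 1

ch, out = "t", ["t"]
for _ in range(20):                       # draw the next char from P(. | ch)
    ch = random.choices(list(probs[ch]), weights=probs[ch].values())[0]
    out.append(ch)
print("".join(out))
```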
Lesson (AI Bootcamp): Introduction to N-Gram Models
Understand n-grams and their use in modeling language with simple probabilities, and implement bigram/trigram extraction with sliding windows.
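Sliding-window extraction itself fits in a few lines; this generic version covers both the bigram and trigram cases.

```python
def ngrams(text, n):
    """Slide a window of width n across the text."""
    return [text[i:i + n] for i in range(len(text) - n + 1)]

print(ngrams("hello", 2))   # ['he', 'el', 'll', 'lo']
print(ngrams("hello", 3))   # ['hel', 'ell', 'llo']
```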
Lesson (AI Bootcamp): RAG Evaluation & Implementation
Evaluate RAG with `recall@k`, `precision@k`, `MRR`, generate synthetic data with LLMs, and implement baseline vector search with LanceDB and OpenAI embeddings.
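A sketch of the three retrieval metrics named above, computed per query. It assumes `retrieved` is the ranked id list coming back from vector search and `relevant` is the gold-standard set; the LanceDB search itself is omitted.

```python
def recall_at_k(retrieved, relevant, k):
    """Fraction of the relevant set found in the top k results."""
    return len(set(retrieved[:k]) & relevant) / len(relevant)

def precision_at_k(retrieved, relevant, k):
    """Fraction of the top k results that are relevant."""
    return len(set(retrieved[:k]) & relevant) / k

def mrr(retrieved, relevant):
    """Reciprocal rank of the first relevant hit (0 if none)."""
    for rank, doc in enumerate(retrieved, start=1):
        if doc in relevant:
            return 1.0 / rank
    return 0.0

print(recall_at_k(["d3", "d1", "d9"], {"d1", "d2"}, k=3))  # 0.5
print(mrr(["d3", "d1", "d9"], {"d1", "d2"}))               # 0.5
```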