
Lessons

Explore all newline lessons


Instructional Finetuning & LoRA | AI Bootcamp

Differentiate fine-tuning vs instruction fine-tuning, apply LoRA/BitFit/prompt tuning, and use Hugging Face PEFT for JSON, tone, or domain tasks.
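For a taste of the PEFT workflow covered here, a minimal sketch of attaching a LoRA adapter to a causal LM with Hugging Face PEFT (the base model, rank, and target modules are illustrative assumptions, not the lesson's exact configuration):

```python
# A minimal sketch: attach a LoRA adapter to GPT-2 with Hugging Face PEFT.
# The base model, rank, and target modules are illustrative assumptions.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, TaskType, get_peft_model

base = AutoModelForCausalLM.from_pretrained("gpt2")

lora_cfg = LoraConfig(
    task_type=TaskType.CAUSAL_LM,   # causal language modeling
    r=8,                            # low-rank dimension of the adapter matrices
    lora_alpha=16,                  # scaling applied to the adapter output
    lora_dropout=0.05,
    target_modules=["c_attn"],      # GPT-2's fused attention projection
)

model = get_peft_model(base, lora_cfg)
model.print_trainable_parameters()  # only the small adapter matrices are trainable
```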


Multi-Head Attention & Mixture of Experts (MoE) | AI Bootcamp

Build single-head and multi-head transformer models, implement Mixture-of-Experts (MoE) attention, and evaluate fluency/generalization.
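As a rough illustration of the routing idea behind MoE, a toy top-1 mixture-of-experts block in PyTorch (layer sizes and the gating scheme are assumptions, not the lesson's implementation):

```python
# A toy Mixture-of-Experts feed-forward block with top-1 routing, in PyTorch.
# Layer sizes and the gating scheme are assumptions for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopOneMoE(nn.Module):
    def __init__(self, d_model=64, d_hidden=128, n_experts=4):
        super().__init__()
        self.gate = nn.Linear(d_model, n_experts)   # router: one logit per expert
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, d_hidden), nn.ReLU(), nn.Linear(d_hidden, d_model))
            for _ in range(n_experts)
        ])

    def forward(self, x):                           # x: (batch, seq, d_model)
        probs = F.softmax(self.gate(x), dim=-1)     # routing probabilities per token
        top_p, top_idx = probs.max(dim=-1)          # keep only the best expert per token
        out = torch.zeros_like(x)
        for i, expert in enumerate(self.experts):
            mask = top_idx == i                     # tokens routed to expert i
            if mask.any():
                out[mask] = top_p[mask].unsqueeze(-1) * expert(x[mask])
        return out

moe = TopOneMoE()
print(moe(torch.randn(2, 10, 64)).shape)            # torch.Size([2, 10, 64])
```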


Implementing Self-Attention | AI Bootcamp

Implement self-attention in PyTorch, visualize attention heatmaps with real LLMs, and compare loss curves against trigram models.
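A single-head self-attention module of the kind built in this lesson (dimensions are illustrative; the lesson's own code may differ):

```python
# A single-head self-attention module in PyTorch; dimensions are illustrative.
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class SelfAttention(nn.Module):
    def __init__(self, d_model=64):
        super().__init__()
        self.q = nn.Linear(d_model, d_model)
        self.k = nn.Linear(d_model, d_model)
        self.v = nn.Linear(d_model, d_model)

    def forward(self, x):                                         # x: (batch, seq, d_model)
        q, k, v = self.q(x), self.k(x), self.v(x)
        scores = q @ k.transpose(-2, -1) / math.sqrt(q.size(-1))  # (batch, seq, seq)
        weights = F.softmax(scores, dim=-1)                       # plot this as a heatmap
        return weights @ v, weights

attn = SelfAttention()
out, weights = attn(torch.randn(1, 5, 64))                        # weights: (1, 5, 5)
```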


Mechanics of Self-Attention | AI Bootcamp

Learn self-attention mechanics (Query, Key, Value, dot products, weighted sums), compute attention scores, and visualize softmax’s role.
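The mechanics in miniature, assuming tiny random Q/K/V tensors rather than learned projections:

```python
# Attention mechanics step by step on tiny random tensors (no learned projections).
import torch
import torch.nn.functional as F

d = 4
Q = torch.randn(3, d)               # one query vector per token (3 tokens)
K = torch.randn(3, d)               # keys
V = torch.randn(3, d)               # values

scores = Q @ K.T / d ** 0.5         # scaled dot products: a (3, 3) score matrix
weights = F.softmax(scores, dim=-1) # softmax turns each row into a probability distribution
context = weights @ V               # weighted sum of values: one context vector per token

print(weights.sum(dim=-1))          # each row sums to 1.0, which is softmax's role here
```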


Motivation for Attention Mechanisms | AI Bootcamp

Understand the limitations of fixed-window n-gram models and explore how word meaning changes with context (static vs contextual embeddings).


Neural N-Gram Models | AI Bootcamp

One-hot encode inputs, build PyTorch bigram/trigram neural networks, train with cross-entropy loss, and monitor training dynamics.
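A compact sketch of the training loop described here, assuming a 27-character vocabulary and random stand-in (context, next-character) pairs rather than real data:

```python
# A neural bigram model in miniature: one-hot context in, next-character logits out.
# The 27-token vocabulary and random (context, next) pairs are stand-ins for real data.
import torch
import torch.nn as nn
import torch.nn.functional as F

vocab_size = 27                                        # e.g. a-z plus a boundary token
pairs = torch.randint(0, vocab_size, (500, 2))         # stand-in (context, next) index pairs

model = nn.Linear(vocab_size, vocab_size)              # one-hot context -> logits over next char
opt = torch.optim.Adam(model.parameters(), lr=0.1)

for step in range(101):
    x = F.one_hot(pairs[:, 0], vocab_size).float()     # one-hot encode the context character
    loss = F.cross_entropy(model(x), pairs[:, 1])      # cross-entropy against the next character
    opt.zero_grad()
    loss.backward()
    opt.step()
    if step % 20 == 0:
        print(step, round(loss.item(), 3))             # watch the training dynamics
```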


Evaluating N-Gram Models | AI Bootcamp

Evaluate model quality using entropy, character diversity, and negative log likelihood (NLL).
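Sketches of these three quantities, assuming a character-keyed bigram probability dictionary like the one built in the sampling lesson (hypothetical here):

```python
# Sketches of the three evaluation quantities, assuming a character-keyed bigram
# probability dictionary `probs` (hypothetical, as built in the sampling lesson).
import math
from collections import Counter

def avg_nll(text, probs):
    """Average negative log likelihood of each character given its predecessor."""
    total = sum(-math.log(probs[prev].get(nxt, 0.0) + 1e-12)
                for prev, nxt in zip(text, text[1:]))
    return total / (len(text) - 1)

def char_diversity(sample):
    """Fraction of distinct characters in a generated sample: a crude diversity check."""
    return len(set(sample)) / len(sample)

def entropy_bits(sample):
    """Empirical character entropy (in bits) of a generated sample."""
    counts = Counter(sample)
    n = len(sample)
    return -sum(c / n * math.log2(c / n) for c in counts.values())
```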


Building and Sampling N-Gram Models | AI Bootcamp

Construct frequency dictionaries, normalize into probability matrices, and sample random text using bigram/trigram models.
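The bigram pipeline in miniature, with a placeholder corpus standing in for real training text:

```python
# The bigram pipeline in miniature: count pairs, normalize rows, sample characters.
# The training string is a placeholder; any character corpus works.
import random
from collections import defaultdict

text = "hello world " * 50                               # stand-in corpus

counts = defaultdict(lambda: defaultdict(int))           # frequency dictionary of pairs
for prev, nxt in zip(text, text[1:]):
    counts[prev][nxt] += 1

probs = {prev: {nxt: c / sum(row.values()) for nxt, c in row.items()}
         for prev, row in counts.items()}                # normalize into probabilities

ch, out = "h", ["h"]
for _ in range(30):                                      # sample one character at a time
    next_chars, weights = zip(*probs[ch].items())
    ch = random.choices(next_chars, weights=weights)[0]
    out.append(ch)
print("".join(out))
```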


Introduction to N-Gram Models | AI Bootcamp

Understand n-grams and how they model language with simple probabilities, and implement bigram/trigram extraction with sliding windows.
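A sliding-window extractor in a few lines (the example sentence is a placeholder):

```python
# Sliding-window n-gram extraction; the example sentence is a placeholder.
def ngrams(tokens, n):
    """Slide a window of size n across the tokens and collect each n-gram."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

words = "the cat sat on the mat".split()
print(ngrams(words, 2))   # bigrams: ('the', 'cat'), ('cat', 'sat'), ...
print(ngrams(words, 3))   # trigrams
```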


RAG Evaluation & Implementation | AI Bootcamp

Evaluate RAG with `recall@k`, `precision@k`, and `MRR`; generate synthetic evaluation data with LLMs; and implement a baseline vector search with LanceDB and OpenAI embeddings.
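One way to compute the three retrieval metrics, assuming a ranked list of document IDs from the retriever and a gold set of relevant IDs (both hypothetical):

```python
# Retrieval metrics over a ranked list of document IDs.
# `ranked` (retriever output) and `relevant` (gold set) are hypothetical examples.
def recall_at_k(ranked, relevant, k):
    return len(set(ranked[:k]) & relevant) / len(relevant)

def precision_at_k(ranked, relevant, k):
    return len(set(ranked[:k]) & relevant) / k

def mrr(ranked, relevant):
    for rank, doc_id in enumerate(ranked, start=1):
        if doc_id in relevant:
            return 1.0 / rank
    return 0.0

ranked = ["d3", "d7", "d1", "d9"]
relevant = {"d1", "d9"}
print(recall_at_k(ranked, relevant, 3))     # 0.5
print(precision_at_k(ranked, relevant, 3))  # 0.333...
print(mrr(ranked, relevant))                # 0.333... (first relevant hit at rank 3)
```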