Lesson: Attention Layer (Power AI course)
- Why context is fundamental in LLMs
- Limits of n-grams, RNNs, embeddings
- Self-attention solves long-range context
- QKV: query–key–value mechanics (see the sketch after this list)
- Dynamic contextual embeddings per token
- Attention weights determine word relevance
- Multi-head attention = parallel perspectives
- GQA reduces attention compute cost
- Mixture-of-experts for specialized attention
- Editing and modifying transformer layers
- Decoder-only vs encoder–decoder framing
- Building context-aware prediction systems
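As a taste of the QKV mechanics listed above, here is a minimal single-head attention sketch in plain NumPy. The toy dimensions, random projections, and function names are illustrative assumptions, not course code.

```python
import numpy as np

def softmax(x, axis=-1):
    # Subtract the row max before exponentiating for numerical stability.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    """Single-head attention: every query token attends over all keys.

    Q, K: (seq_len, d_k); V: (seq_len, d_v).
    Returns context-dependent embeddings of shape (seq_len, d_v).
    """
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)   # (seq_len, seq_len) relevance scores
    weights = softmax(scores)         # attention weights: each row sums to 1
    return weights @ V                # weighted mix of value vectors

# Toy example: 4 tokens with 16-d embeddings projected down to 8-d heads.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 16))
Wq, Wk, Wv = (rng.normal(size=(16, 8)) for _ in range(3))
out = scaled_dot_product_attention(x @ Wq, x @ Wk, x @ Wv)
print(out.shape)  # (4, 8): one contextual embedding per token
```

Multi-head attention simply runs several such heads in parallel on different projections and concatenates the results.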
Lesson: Multimodal Embeddings (Power AI course)
- Foundations of multimodal representation learning
- Text, image, audio, video embeddings
- Contrastive learning for cross-modal alignment (see the sketch after this list)
- Shared latent spaces across modalities
- Vision encoders and patch tokenization
- Transformer encoders for text meaning
- Audio preprocessing and spectral features
- Time-series tokenization via SAX or VQ
- Fusion modules for modality alignment
- Cross-attention for integrated reasoning
- Zero-shot retrieval and multimodal search
- Real-world multimodal applications overview
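To make cross-modal contrastive alignment concrete, here is a rough CLIP-style symmetric loss in NumPy. It assumes a text encoder and an image encoder have already produced a batch of paired embeddings; the batch size, dimensions, and temperature are illustrative, not course parameters.

```python
import numpy as np

def l2_normalize(x, axis=-1):
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def clip_style_loss(text_emb, image_emb, temperature=0.07):
    """Symmetric contrastive loss over paired (text_i, image_i) batches.

    Matching pairs lie on the diagonal of the similarity matrix; the loss
    pulls them together in the shared latent space and pushes the
    mismatched off-diagonal pairs apart.
    """
    t = l2_normalize(text_emb)
    v = l2_normalize(image_emb)
    logits = t @ v.T / temperature   # (batch, batch) scaled cosine sims
    labels = np.arange(len(logits))  # correct match for row i is column i

    def xent(l):
        # Softmax cross-entropy with the diagonal as the target class.
        l = l - l.max(axis=1, keepdims=True)
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -log_probs[labels, labels].mean()

    # Average of the text->image and image->text directions.
    return (xent(logits) + xent(logits.T)) / 2

rng = np.random.default_rng(0)
loss = clip_style_loss(rng.normal(size=(8, 32)), rng.normal(size=(8, 32)))
print(float(loss))
```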
Lesson: Tokens and Embeddings (Power AI course)
- Tokenization as a dictionary for model input
- Tokens → IDs → contextual embeddings
- Semantic meaning emerges only in embeddings
- Transformer layers reshape embeddings by context
- Pretrained embeddings accelerate domain understanding
- Good tokenization reduces loss and improves learning
- Tokenizer choice impacts RAG chunking
- Compression tradeoffs differ by domain needs
- Tokenization affects inference cost and speed
- Compare BPE, SentencePiece, custom tokenizers
- Emerging trend: byte-level latent transformers
- Generations of embeddings add deeper semantics
- Similarity measured via dot products, distance (see the sketch after this list)
- Embeddings enable search, clustering, retrieval systems
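The last two bullets boil down to nearest-neighbour lookups in embedding space. Here is a small illustration using dot-product (cosine) similarity; the 4-d vectors are hypothetical stand-ins for real model embeddings.

```python
import numpy as np

def cosine_similarity(a, b):
    # Dot product of unit vectors: 1.0 = identical direction, ~0 = unrelated.
    a = a / np.linalg.norm(a)
    b = b / np.linalg.norm(b)
    return float(a @ b)

# Hypothetical embeddings; a real system would take these from a model.
emb = {
    "cat": np.array([0.9, 0.1, 0.0, 0.2]),
    "dog": np.array([0.8, 0.2, 0.1, 0.3]),
    "car": np.array([0.1, 0.9, 0.7, 0.0]),
}
query = emb["cat"]
ranked = sorted(emb, key=lambda w: cosine_similarity(query, emb[w]), reverse=True)
print(ranked)  # ['cat', 'dog', 'car']: semantic neighbours rank first
```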
Lesson: Orientation — Technical Kickoff (AI Accelerator)
- Jupyter & Python Setup
  - Understanding why Python is used in AI (simplicity, libraries, end-to-end stack)
  - Exploring Jupyter Notebooks: shortcuts, code + text blocks, and cloud tools like Google Colab
- Hands-On with Arrays, Vectors, and Tensors
  - Creating and manipulating 2D and 3D NumPy arrays (reshaping, indexing, slicing)
  - Performing matrix operations: element-wise math and dot products
  - Visualizing vectors and tensors in 2D and 3D space using matplotlib
- Mathematical Foundations in Practice
  - Exponentiation and logarithms: visual intuition and matrix operations
  - Normalization techniques and why they matter in ML workflows
  - Activation functions: sigmoid and softmax, coded from scratch (see the sketch after this list)
- Statistics and Real Data Practice
  - Exploring core stats: mean, standard deviation, normal distributions
  - Working with real datasets (Titanic) using Pandas: filtering, grouping, feature engineering, visualization
  - Preprocessing tabular data for ML: encoding, scaling, train/test split
- Bonus Topics
  - Intro to probability, distributions, classification vs regression
  - Tensor intuition and compute providers (GPU, Colab, cloud vs local)
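In the spirit of the from-scratch activation-function exercise above, a minimal NumPy sketch (the example logits are arbitrary):

```python
import numpy as np

def sigmoid(x):
    # Squashes any real number into (0, 1); common for binary outputs.
    return 1.0 / (1.0 + np.exp(-x))

def softmax(x):
    # Shift by the max so exp() cannot overflow; the output sums to 1.
    e = np.exp(x - np.max(x))
    return e / e.sum()

print(sigmoid(0.0))            # 0.5
probs = softmax(np.array([2.0, 1.0, 0.1]))
print(probs, probs.sum())      # a probability distribution summing to 1.0
```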
Lesson: Orientation — Course Introduction (AI Accelerator)
- Meet the instructors and understand the support ecosystem (Circle, Notion, async help)
- Learn the 4 learning pillars: concept clarity, muscle memory, project building, and peer community
- Understand the course philosophy: minimize math, maximize intuition, focus on real-world relevance
- Set up accountability systems, learning tools, and productivity habits for long-term success