
        lesson

Contrastive loss vs triplet loss (AI bootcamp 2)

- Compare the two core objectives used for fine-tuning retrievers
- Understand how each behaves in hard-negative-rich domains like code or finance
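
A minimal PyTorch sketch of the two objectives, assuming unit-normalized query and document embeddings; the margin values and the toy batch are illustrative, not the bootcamp's exact recipe.

```python
# Sketch: margin-based contrastive loss vs triplet loss for retriever fine-tuning.
# Assumes embeddings are already L2-normalized so cosine similarity is well behaved.
import torch
import torch.nn.functional as F

def contrastive_loss(query, doc, label, margin=0.5):
    """Pairwise objective: pull positives together, push negatives beyond a margin.
    `label` is 1 for a relevant (query, doc) pair, 0 for an irrelevant one."""
    dist = 1 - F.cosine_similarity(query, doc)              # cosine distance per pair
    pos_term = label * dist.pow(2)                          # positives: minimize distance
    neg_term = (1 - label) * F.relu(margin - dist).pow(2)   # negatives: push past the margin
    return (pos_term + neg_term).mean()

def triplet_loss(query, pos_doc, neg_doc, margin=0.3):
    """Triplet objective: the positive must beat the (hard) negative by at least `margin`."""
    pos_sim = F.cosine_similarity(query, pos_doc)
    neg_sim = F.cosine_similarity(query, neg_doc)
    return F.relu(margin - (pos_sim - neg_sim)).mean()

# Toy batch: 4 queries with 16-dim embeddings.
q = F.normalize(torch.randn(4, 16), dim=-1)
d_pos = F.normalize(torch.randn(4, 16), dim=-1)
d_neg = F.normalize(torch.randn(4, 16), dim=-1)
print(contrastive_loss(q, d_pos, torch.ones(4)))
print(triplet_loss(q, d_pos, d_neg))
```

The triplet form makes each hard negative explicit relative to its positive, which is why it is often preferred when hard negatives dominate; the contrastive form scores each pair in isolation.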


        lesson

Query routing logic and memory-index hybrids (AI bootcamp 2)

- Implement index routing systems where queries are conditionally routed:
  - short factual query → lexical index
  - long reasoning query → dense retriever
  - visual question → image embedding index
- Learn how to fuse local memory with global vector stores for agentic long-term retrieval
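
A sketch of the routing idea under assumed heuristics (query length and a `has_image` flag); the index names, thresholds, and the stubbed global store are hypothetical placeholders rather than any specific product's API.

```python
# Sketch: conditional index routing plus local/global memory fusion.
# The predicates and index names are illustrative; a real router would typically
# use a classifier or an LLM call instead of hand-written heuristics.
from dataclasses import dataclass

@dataclass
class Query:
    text: str
    has_image: bool = False

def route(query: Query) -> str:
    if query.has_image:
        return "image_embedding_index"     # visual question -> image embeddings
    if len(query.text.split()) <= 6:
        return "lexical_index"             # short factual query -> lexical/BM25
    return "dense_retriever"               # long reasoning query -> dense vectors

def retrieve(query: Query, local_memory: list[str], global_search) -> list[str]:
    """Fuse agent-local memory with a global vector store: local hits first, then global."""
    index = route(query)
    words = query.text.lower().split()
    local_hits = [m for m in local_memory if any(w in m.lower() for w in words)]
    return local_hits + global_search(index, query.text)

# Usage with a stubbed global store:
fake_global = lambda index, text: [f"[{index}] result for: {text}"]
print(retrieve(Query("capital of France"), ["note: France trip planned for May"], fake_global))
print(retrieve(Query("walk me through why the cache invalidation bug only appears under load"),
               [], fake_global))
```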


        lesson

Multi-vector DB vs standard DB (AI bootcamp 2)

- Understand how multi-vector databases (e.g., ColBERT, Turbopuffer) store multiple vectors per document to support fine-grained relevance
- Contrast this with standard single-vector-per-doc retrieval (e.g., FAISS), and learn when multi-vector setups are worth the extra complexity
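
A NumPy sketch contrasting the two scoring models: one pooled vector per document versus ColBERT-style MaxSim over per-token vectors. The embedding sizes and token counts are invented for illustration.

```python
# Sketch: single-vector retrieval vs ColBERT-style multi-vector (MaxSim) scoring.
import numpy as np

rng = np.random.default_rng(0)

def normalize(x):
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

# Standard setup: one pooled vector per document, one per query -> a single dot product.
doc_vec = normalize(rng.normal(size=(128,)))
query_vec = normalize(rng.normal(size=(128,)))
single_vector_score = float(doc_vec @ query_vec)

# Multi-vector setup: one vector per token. Each query token takes the max similarity
# over all document tokens, and the per-token maxima are summed (MaxSim).
doc_tokens = normalize(rng.normal(size=(80, 128)))    # 80 document token embeddings
query_tokens = normalize(rng.normal(size=(12, 128)))  # 12 query token embeddings
sim_matrix = query_tokens @ doc_tokens.T              # (12, 80) token-level similarities
maxsim_score = float(sim_matrix.max(axis=1).sum())

print(f"single-vector: {single_vector_score:.3f}  multi-vector MaxSim: {maxsim_score:.3f}")
# The trade-off: 80 stored vectors per document instead of 1, which is the storage and
# latency price paid for the finer-grained relevance signal.
```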


        lesson

Late interaction methods (ColQwen-Omni, audio+image chunks) (AI bootcamp 2)

- Study late interaction architectures (like ColQwen-Omni) that separate dense retrieval from deep semantic fusion
- Explore how these models support chunking and retrieval over image, audio, and video-text combinations using attention-based fusion at scoring time


        lesson

Cartridge-based retrieval (self-study distillation) (AI bootcamp 2)

- Learn how to modularize retrieval into topic- or task-specific “cartridges”
- Understand that cartridges are pre-distilled context sets for self-querying agents
- Study how this approach is inspired by OpenAI’s retrieval plugin and LangChain’s retriever routers
- See how cartridges improve retrieval precision by narrowing memory to high-relevance windows
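
A loose sketch of the cartridge idea as described above, modeling a cartridge as a pre-distilled, topic-scoped set of context snippets and using a naive keyword match to pick one; the topics, notes, and selection logic are all hypothetical.

```python
# Sketch: cartridge-based retrieval. Each "cartridge" is treated as a pre-distilled,
# topic-scoped context set; the agent first picks a cartridge, then retrieves within it.
CARTRIDGES = {
    "tax":      ["Distilled note: capital gains are taxed on realized sales",
                 "Distilled note: loss harvesting offsets gains in the same year"],
    "k8s":      ["Distilled note: a CrashLoopBackOff usually means the container keeps exiting",
                 "Distilled note: readiness probes gate traffic, liveness probes trigger restarts"],
    "payments": ["Distilled note: idempotency keys prevent double charges on retries"],
}

def pick_cartridge(query: str) -> str:
    """Naive router: score cartridges by keyword overlap with the query."""
    words = set(query.lower().split())
    overlap = lambda topic: len(words & set(" ".join(CARTRIDGES[topic]).lower().split()))
    return max(CARTRIDGES, key=overlap)

def retrieve_from_cartridge(query: str, top_k: int = 2) -> list[str]:
    topic = pick_cartridge(query)
    # Retrieval is now confined to a small, high-relevance window instead of the whole
    # memory, which is where the precision gain comes from.
    return CARTRIDGES[topic][:top_k]

print(retrieve_from_cartridge("why did my pod restart after the liveness probe failed"))
```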


        lesson

RL in decoding, CoT prompting, and feedback loops (AI bootcamp 2)

- Understand how RL ideas can be applied without any training by introducing dynamic feedback at inference time
- Apply reward scoring or confidence thresholds to adjust CoT (Chain-of-Thought) reasoning steps
- Use external tools (e.g., validators or search APIs) as part of a feedback loop that rewards correct or complete answers
- Understand how RL concepts power speculative decoding verification, scratchpad agents, and dynamic rerouting during generation
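
A minimal sketch of such a feedback loop, with the model call and the validator stubbed out by random functions; the reward threshold, retry budget, and prompt-revision strategy are assumptions for illustration.

```python
# Sketch: an RL-flavored inference loop without any training. A generator proposes a
# CoT draft, an external validator scores it (the "reward"), and low-reward drafts
# trigger a revised attempt with the critique folded back into the prompt.
import random

def generate_cot(prompt: str) -> str:
    """Stub for an LLM call that returns a chain-of-thought draft."""
    return f"Reasoning about: {prompt} -> draft answer {random.randint(0, 9)}"

def validator(answer: str) -> float:
    """Stub reward signal (e.g., a schema checker or search API). Returns a score in [0, 1]."""
    return random.random()

def answer_with_feedback(prompt: str, threshold: float = 0.7, max_rounds: int = 3) -> str:
    feedback = ""
    for _ in range(max_rounds):
        draft = generate_cot(prompt + feedback)
        reward = validator(draft)
        if reward >= threshold:            # accept once the reward clears the bar
            return draft
        # Otherwise, reroute: append the critique and try again (dynamic rerouting).
        feedback = f"\nPrevious draft scored {reward:.2f}; revise the weak steps."
    return draft                           # fall back to the last draft

random.seed(1)
print(answer_with_feedback("How many weekdays are in March 2025?"))
```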


        lesson

Q-learning & Policy Gradients (conceptual overview) (AI bootcamp 2)

- Learn the concept of Q-learning as a method to estimate how good an action (token) is in a specific context (prompt state)
- Learn the concept of policy gradients as a method to directly optimize the probability distribution over actions to maximize long-term reward
- Understand how Q-learning and policy gradients form the basis of RLHF, DPO, and advanced training techniques for aligning LLM behavior
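
A toy, conceptual sketch on a two-armed bandit, where the two "actions" stand in for token choices in a fixed prompt state; the learning rates and reward probabilities are arbitrary.

```python
# Sketch: a tabular Q-learning update vs a REINFORCE-style policy-gradient update
# on a 2-armed bandit. Action = "token", the single fixed state = "prompt".
import math, random

random.seed(0)
true_reward = {0: 0.2, 1: 0.8}            # action 1 is better; the learner must discover this

# --- Q-learning: estimate Q(a) and act (mostly) greedily on the estimates ---
Q = {0: 0.0, 1: 0.0}
alpha = 0.1
for _ in range(500):
    a = random.choice([0, 1]) if random.random() < 0.1 else max(Q, key=Q.get)
    r = 1.0 if random.random() < true_reward[a] else 0.0
    Q[a] += alpha * (r - Q[a])            # bandit case: no next state, so the target is just r

# --- Policy gradient: parameterize action probabilities and push probability mass
# --- toward actions that received reward (single-step REINFORCE).
theta = [0.0, 0.0]
lr = 0.1
for _ in range(500):
    exps = [math.exp(t) for t in theta]
    probs = [e / sum(exps) for e in exps]               # softmax policy
    a = 0 if random.random() < probs[0] else 1
    r = 1.0 if random.random() < true_reward[a] else 0.0
    for i in range(2):                                   # grad of log pi(a) = 1[i==a] - probs[i]
        theta[i] += lr * r * ((1.0 if i == a else 0.0) - probs[i])

print("Q values:", Q, " policy probs:", probs)
```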


        lesson

Monte Carlo vs Temporal Difference (TD) learning (AI bootcamp 2)

- Explore the Monte Carlo and TD methods of learning from sequences
- Contrast updating from complete episode returns (Monte Carlo) with bootstrapped, per-step updates (TD)
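
A toy comparison on the classic 5-state random walk: Monte Carlo waits for the episode's final return, TD(0) bootstraps from the next state's current estimate after every step. The step sizes and episode counts are illustrative.

```python
# Sketch: Monte Carlo vs TD(0) value estimation on a 5-state random walk.
# States 0..4, start in the middle; exiting right pays reward 1, exiting left pays 0.
import random

random.seed(0)
N = 5
alpha = 0.05

def run_episode():
    s, trajectory = 2, []
    while 0 <= s <= N - 1:
        trajectory.append(s)
        s += random.choice([-1, 1])
    return trajectory, (1.0 if s == N else 0.0)

# --- Monte Carlo: wait for the episode to finish, then move every visited state
# --- toward the full observed return G.
V_mc = [0.5] * N
for _ in range(2000):
    traj, G = run_episode()
    for s in traj:
        V_mc[s] += alpha * (G - V_mc[s])

# --- TD(0): update after every single step, bootstrapping from the next state's estimate.
V_td = [0.5] * N
for _ in range(2000):
    s = 2
    while 0 <= s <= N - 1:
        s_next = s + random.choice([-1, 1])
        if 0 <= s_next <= N - 1:
            target = V_td[s_next]                    # no intermediate reward; bootstrap
        else:
            target = 1.0 if s_next == N else 0.0     # terminal: use the final reward
        V_td[s] += alpha * (target - V_td[s])
        s = s_next

print("true:", [round((i + 1) / 6, 3) for i in range(N)])
print("MC:  ", [round(v, 3) for v in V_mc])
print("TD:  ", [round(v, 3) for v in V_td])
```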


        lesson

Markov Decision Processes (MDP) as LLM analogies (AI bootcamp 2)

- Learn how token generation in LLMs can be framed as a Markov process
- Understand the key components of an MDP
- Understand how these map conceptually to autoregressive decoding
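
A small data-structure sketch of the analogy: state = generated prefix, action = next token, transition = deterministic append, reward only at episode end. The vocabulary and the stand-in reward function are invented.

```python
# Sketch: framing autoregressive decoding as an MDP.
#   state      -> the token prefix generated so far
#   action     -> the next token chosen from the vocabulary
#   transition -> deterministic: append the chosen token to the prefix
#   reward     -> here, only at episode end (a stand-in for a preference/quality score)
from dataclasses import dataclass, field

VOCAB = ["the", "cat", "sat", "on", "mat", "<eos>"]

@dataclass
class DecodingMDP:
    prefix: tuple[str, ...] = field(default_factory=tuple)   # the state

    def actions(self) -> list[str]:
        return VOCAB

    def step(self, token: str) -> tuple["DecodingMDP", float, bool]:
        next_state = DecodingMDP(self.prefix + (token,))
        done = token == "<eos>"
        reward = self.terminal_reward(next_state.prefix) if done else 0.0
        return next_state, reward, done

    @staticmethod
    def terminal_reward(prefix: tuple[str, ...]) -> float:
        # Invented reward model: outputs starting with "the cat" score by length.
        return float(len(prefix)) if prefix[:2] == ("the", "cat") else 0.0

# One rollout of the "policy" (here a fixed token sequence instead of an LLM).
state, total = DecodingMDP(), 0.0
for tok in ["the", "cat", "sat", "<eos>"]:
    state, r, done = state.step(tok)
    total += r
print(state.prefix, "reward:", total)
```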


        lesson

State-of-the-art decoders (AI bootcamp 2)

- Explore decoding strategies that influence LLM output diversity and fluency
- Top-k sampling
  - Learn how Top-k sampling truncates the output distribution to the k most likely tokens (e.g., k=16)
  - Understand how Top-k sampling balances creativity and control, and why it’s especially effective with small vocabularies, such as byte-level models
- Nucleus (Top-p) sampling
  - Learn how Nucleus (Top-p) sampling dynamically includes tokens up to a cumulative probability p (e.g., p=0.9)
  - Understand how Top-p sampling produces more adaptive and coherent completions than Top-k, especially in unpredictable generation tasks
- Beam search
  - Learn how Beam search keeps multiple candidate completions in parallel and scores them to select the most likely overall path
  - Understand why Beam search is useful for deterministic outputs (e.g., code, structured data) and why it can lead to repetitive or bland completions in open-ended generation
- Speculative decoding (OpenAI-style)
  - Learn how Speculative decoding speeds up inference by letting a small model propose multiple token candidates in parallel, which a larger model verifies
  - Understand how speculative decoding works internally and why it is gaining popularity in production systems like Groq and OpenAI APIs
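
Minimal NumPy implementations of top-k and nucleus (top-p) filtering over a single next-token distribution; the probabilities below are invented, and a real decoder would apply these filters to model logits at every generation step.

```python
# Sketch: top-k and nucleus (top-p) sampling over one next-token distribution.
import numpy as np

rng = np.random.default_rng(0)
vocab = np.array(["the", "a", "cat", "dog", "sat", "flew", "quantum", "mat"])
probs = np.array([0.30, 0.20, 0.15, 0.12, 0.10, 0.07, 0.04, 0.02])

def top_k_sample(probs, k=3):
    """Keep only the k most likely tokens, renormalize, then sample."""
    top = np.argsort(probs)[-k:]                   # indices of the k largest probabilities
    filtered = np.zeros_like(probs)
    filtered[top] = probs[top]
    filtered /= filtered.sum()
    return rng.choice(len(probs), p=filtered)

def top_p_sample(probs, p=0.9):
    """Keep the smallest set of tokens whose cumulative probability reaches p."""
    order = np.argsort(probs)[::-1]                # most likely first
    cumulative = np.cumsum(probs[order])
    cutoff = np.searchsorted(cumulative, p) + 1    # how many tokens the nucleus needs
    nucleus = order[:cutoff]
    filtered = np.zeros_like(probs)
    filtered[nucleus] = probs[nucleus]
    filtered /= filtered.sum()
    return rng.choice(len(probs), p=filtered)

print("top-k:", vocab[top_k_sample(probs, k=3)])
print("top-p:", vocab[top_p_sample(probs, p=0.9)])
```

Beam search and speculative decoding operate over whole candidate sequences (and, in the speculative case, a separate draft model plus a verifier), so they do not reduce to a single-step filter like the two samplers above.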

