Bootcamp

AI Bootcamp

Everyone’s heard of ChatGPT, but what truly powers these modern large language models? It all starts with the transformer architecture. This bootcamp demystifies LLMs, taking you from concept to code and giving you a full, hands-on understanding of how transformers work. You’ll gain intuitive insights into the core components—autoregressive decoding, multi-head attention, and more—while bridging theory, math, and code. By the end, you’ll be ready to understand, build, and optimize LLMs, with the skills to read research papers, evaluate models, and confidently tackle ML interviews.

  • 5.0 / 5 (1 rating)
Bootcamp Instructors

Dr. Dipen

I am an AI/ML researcher with 150+ citations and 16 published research papers. I have three tier-1 publications, including Internet of Things (Elsevier), Biomedical Signal Processing and Control (Elsevier), and IEEE Access. In my research journey, I have collaborated with NASA Glenn Research Center, Cleveland Clinic, and the U.S. Department of Energy for various research projects. I am also an official reviewer and have reviewed over 100 research papers for Elsevier, IEEE Transactions, ICRA, MDPI, and other top journals and conferences. I hold a PhD from Cleveland State University with a focus on large language models (LLMs) in cybersecurity, and I also earned a master’s degree in informatics from Northeastern University.


zaoyang

Owner of Newline and previously co-creator of Farmville (200M users, $3B revenue) and Kaspa ($3B market cap). Self-taught in gaming, crypto, deep learning, and now generative AI. Newline is used by 250,000+ professionals from Salesforce, Adobe, Disney, Amazon, and more. Newline has built LLM-powered editorial tools, article generation with reinforcement learning and LLMs, and instructor outreach tools, and is currently building generative AI products that will be announced soon.

How The Bootcamp Works

01Remote

You can take the course from anywhere in the world, as long as you have a computer and an internet connection.

02Self-Paced

Learn at your own pace, whenever it's convenient for you. With no rigid schedule to worry about, you can take the course on your own terms.

03Community

Join a vibrant community of other students who are also learning with AI Bootcamp. Ask questions, get feedback and collaborate with others to take your skills to the next level.

04Structured

Learn in a cohesive fashion that's easy to follow. With a clear progression from basic principles to advanced techniques, you'll grow stronger and more skilled with each module.

Bootcamp Overview

What You Will Learn
  • Understand the lifecycle of large language models, from training to inference

  • Build and deploy a fully functional LLM Inference API

  • Master tokenization techniques, including byte-pair encoding and word embeddings

  • Develop foundational models like n-grams and transition to transformer-based models

  • Implement self-attention and feed-forward neural networks in transformers

  • Evaluate LLM performance using metrics like perplexity

  • Deploy models using modern tools like Huggingface, Modal, and TorchScript

  • Adapt pre-trained LLMs through fine-tuning and retrieval-augmented generation (RAG)

  • Leverage state-of-the-art tools for data curation and adding ethical guardrails

  • Apply instruction-tuning techniques with low-rank adapters

  • Explore multi-modal LLMs integrating text, voice, images, and robotics

  • Understand machine learning operations, from project scoping to deployment

  • Design intelligent agents with planning, reflection, and collaboration capabilities

  • Keep up-to-date with AI trends, tools, and industry best practices

  • Receive technical reviews and mentorship to refine your projects

  • Create a robust portfolio showcasing real-world AI applications

In this bootcamp, we dive deep into Large Language Models (LLMs) to help you understand, build, and optimize their architecture for real-world applications. LLMs are revolutionizing industries—from customer support to content creation—but understanding how these models work and optimizing them for specific tasks presents unique challenges.

Over an intensive, multi-week curriculum, we cover:

  • The technical foundations of LLMs, including autoregressive decoding, positional encoding, and multi-head attention
  • The LLM lifecycle, from large-scale pretraining to fine-tuning and instruction tuning for niche applications
  • Industry best practices for model evaluation, identifying performance bottlenecks, and employing cutting-edge architectures to balance efficiency and scalability

This bootcamp includes hours of in-depth instruction, hands-on coding sessions, and access to a dedicated community for ongoing support and discussions. Additionally, you’ll have exclusive access to code templates, an expansive reference library, and downloadable resources for continuous learning.
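
To make the attention and autoregressive-decoding ideas above concrete, here is a minimal sketch of scaled dot-product self-attention with a causal mask in NumPy. It is an illustrative toy, not code from the course materials; the shapes and random inputs are made up.

```python
import numpy as np

def causal_self_attention(Q, K, V):
    """Scaled dot-product attention with a causal mask (illustrative toy example)."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)               # (seq, seq) similarity scores
    mask = np.triu(np.ones_like(scores), k=1)     # 1s above the diagonal mark future positions
    scores = np.where(mask == 1, -1e9, scores)    # block attention to future tokens
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)   # softmax over each row
    return weights @ V                            # weighted sum of value vectors

# toy example: 4 tokens, 8-dimensional queries/keys/values
rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))
K = rng.normal(size=(4, 8))
V = rng.normal(size=(4, 8))
print(causal_self_attention(Q, K, V).shape)       # (4, 8)
```
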

Your expert guides through this bootcamp are:

Dr. Dipen Bhuva: Dr. Dipen is an AI/ML researcher with 150+ citations and 16 published research papers. He has three tier-1 publications, including Internet of Things (Elsevier), Biomedical Signal Processing and Control (Elsevier), and IEEE Access. In his research journey, he has collaborated with NASA Glenn Research Center, Cleveland Clinic, and the U.S. Department of Energy on various research projects. He is also an official reviewer and has reviewed over 100 research papers for Elsevier, IEEE Transactions, ICRA, MDPI, and other top journals and conferences. He holds a PhD from Cleveland State University with a focus on LLMs in cybersecurity and a master's degree in informatics from Northeastern University.

Zao Yang: Zao is a co-founder of Newline, a platform used by 150k professionals from companies like Salesforce, Adobe, Disney, and Amazon. Zao has a rich history in the tech industry, co-creating Farmville (200 million users, $3B revenue) and Kaspa ($3B market cap). Self-taught in deep learning, generative AI, and machine learning, Zao is passionate about empowering others to develop practical AI applications. His extensive knowledge of both the technical and business sides of AI projects will be invaluable as you work on your own.

With Dipen and Zao's guidance, you’ll gain practical insights into building and deploying advanced AI models, preparing you for the most challenging and rewarding roles in the AI field.

What You Will Gain
  • The ability to build large language models, which can increase your salary by $50k a year (worth $500k over 10 years)

  • A cheatsheet for generative AI interviews at FAANG companies, worth $50k a year ($500k over 10 years)

  • A complete course on end-to-end streaming with LangChain, including a fully functional application for startups ($15k in value)

  • The ability to run an AI consulting practice, worth $100k a year ($1M over 10 years)

  • The ability to build an AI company ($1M in annual value)

  • A technical and business design review of your project from Alvin and Zao ($25,000 in value)

  • In total, roughly $3.4M in value; this will be a $10k to $15k bootcamp in the future

  • Guaranteed help to complete your project


Our students work at

  • Salesforce, Intuit, Adobe, Disney, Heroku, AT&T, VMware, Microsoft, Amazon

Bootcamp Syllabus and Content

Week 1

Onboarding & Tooling

3 Units

  • 01
    AI Onboarding & Python Essentials
     
    • Welcome & Community

      • Course Overview
      • Community: Getting Started with Circle and Notion
    • Python & Tooling Essentials

      • Intro to Python: Why Python for AI and Why Use Python 3 (Not Python 2)
      • Install Python and Set Virtual Environments
      • Basic Python Introduction: Variables, Data Types, Loops & Functions
      • Using Jupyter Notebooks
    • Introduction to AI Tools & Ecosystem

      • Introduction to AI & Why Learn It
      • Models & Their Features: Choosing the Right AI Model for Your Needs
      • Finding and Using AI Models & Datasets: A Practical Guide with Hugging Face
      • Using Restricted vs Unrestricted Open-Source AI Models from Hugging Face
      • Hardware for AI: GPUs, TPUs, and Apple Silicon
      • Advanced AI Concepts Worth Knowing
      • Practical Tips on How to Be Productive with AI
    • Brainstorming Ideas with AI

      • Brainstorming with Prompting Tools
  • 02
    Orientation — Course Introduction
     
    • Meet the instructors and understand the support ecosystem (Circle, Notion, async help)
    • Learn the 4 learning pillars: concept clarity, muscle memory, project building, and peer community
    • Understand course philosophy: minimize math, maximize intuition, focus on real-world relevance
    • Set up accountability systems, learning tools, and productivity habits for long-term success
  • 03
    Orientation — Technical Kickoff
     
    • Jupyter & Python Setup

      • Understanding why Python is used in AI (simplicity, libraries, end-to-end stack)
      • Exploring Jupyter Notebooks: shortcuts, code + text blocks, and cloud tools like Google Colab
    • Hands-On with Arrays, Vectors, and Tensors

      • Creating and manipulating 2D and 3D NumPy arrays (reshaping, indexing, slicing)
      • Performing matrix operations: element-wise math and dot products
      • Visualizing vectors and tensors in 2D and 3D space using matplotlib
    • Mathematical Foundations in Practice

      • Exponentiation and logarithms: visual intuition and matrix operations
      • Normalization techniques and why they matter in ML workflows
      • Activation functions: sigmoid and softmax, coded from scratch (see the code sketch below)
    • Statistics and Real Data Practice

      • Exploring core stats: mean, standard deviation, normal distributions
      • Working with real datasets (Titanic) using Pandas: filtering, grouping, feature engineering, visualization
      • Preprocessing tabular data for ML: encoding, scaling, train/test split
    • Bonus Topics

      • Intro to probability, distributions, classification vs regression
      • Tensor intuition and compute providers (GPU, Colab, cloud vs local)
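
As a taste of the from-scratch exercises in this week's technical kickoff, here is a minimal NumPy sketch of the sigmoid and softmax activations. The example scores are made up for illustration.

```python
import numpy as np

def sigmoid(x):
    """Squash any real number into the range (0, 1)."""
    return 1.0 / (1.0 + np.exp(-x))

def softmax(x):
    """Turn a vector of scores into a probability distribution."""
    shifted = x - np.max(x)          # subtract the max for numerical stability
    exps = np.exp(shifted)
    return exps / exps.sum()

scores = np.array([2.0, 1.0, 0.1])
print(sigmoid(scores))                            # elementwise values in (0, 1)
print(softmax(scores), softmax(scores).sum())     # probabilities that sum to 1.0
```
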
Week 2

AI Projects and Use Cases

3 Units

  • 01
    Navigating the Landscape of LLM Projects & Modalities
     
    • Compare transformer-based LLMs vs diffusion models and their use cases
    • Understand the "lego blocks" of LLM-based systems: prompts, embeddings, generation, inference
    • Explore core LLM application types: RAG, vertical models, agents, and multimodal apps
    • Learn how LLMs are being used in different roles and industries (e.g., healthcare, finance, legal)
    • Discuss practical project scoping: what to build vs outsource, how to identify viable ideas
    • Identify limitations of LLMs: hallucinations, lack of reasoning, sensitivity to prompts
    • Highlight real-world startup examples (e.g., AutoShorts, HeadshotPro) and venture-backed tools
  • 02
    From Theory to Practice — Building Your First LLM Application
     
    • Understand how inference works in LLMs (prompt processing vs. autoregressive decoding)
    • Explore real-world AI applications: RAG, vertical models, agents, multimodal tools
    • Learn the five phases of the model lifecycle: pretraining to RLHF to evaluation
    • Compare architecture types: generic LLMs vs. ChatGPT vs. domain-specialized models
    • Work with tools like Hugging Face, Modal, and vector databases
    • Build a “Hello World” LLM inference API using OPT-125m on Modal (see the code sketch below)
  • 03
    Intro to AI-Centric Evaluation
     
    • Metrics and Evaluation Design
    • Foundation for Future Metrics Work
    • Building synthetic data for AI applications
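
For the “Hello World” inference API in this week's second unit, the local inference step might look roughly like the sketch below. It assumes the Hugging Face transformers library and the public facebook/opt-125m checkpoint; the Modal deployment wrapper taught in the course is omitted.

```python
# Minimal local sketch of the inference step (the bootcamp wraps this in a Modal API).
from transformers import pipeline

generator = pipeline("text-generation", model="facebook/opt-125m")
result = generator("Hello world, large language models are", max_new_tokens=20)
print(result[0]["generated_text"])
```
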
Week 3

Prompt Engineering & Embeddings

2 Units

  • 01
    Prompt Engineering — From Structure to Evaluation (Mini Project 1)
     
    • Learn foundational prompt styles: vague vs. specific, structured formatting, XML-tagging
    • Practice prompt design for controlled output: enforcing strict JSON formats with Pydantic
    • Discover failure modes and label incorrect LLM behavior (e.g., hallucinations, format issues)
    • Build early evaluators to measure LLM output quality and rule-following
    • Write your first "LLM-as-a-judge" prompts to automate pass/fail decisions
    • Iterate prompts based on analysis-feedback loops and evaluator results
    • Explore advanced prompting techniques: multi-turn, rubric-based human alignment, and A/B testing
    • Experiment with dspy for signature-based structured prompting and validation
  • 02
    Tokens, Embeddings & Modalities — Foundations of Understanding Text, Image, and Audio
     
    • Understand the journey from raw text → tokens → token IDs → embeddings (see the code sketch below)
    • Compare word-based, BPE, and advanced tokenizers (LLaMA, GPT-2, T5)
    • Analyze how good/bad tokenization affects loss, inference time, and semantic meaning
    • Learn how embedding vectors represent meaning and change with context
    • Explore and manipulate Word2Vec-style word embeddings through vector math and dot product similarity
    • Apply tokenization and embedding logic to multimodal models (CLIP, ViLT, ViT-GPT2)
    • Conduct retrieval and classification tasks using image and audio embeddings (CLIP, Wav2Vec2)
    • Discuss emerging architectures like Byte Latent Transformers and their implications
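
A minimal sketch of the text → tokens → token IDs journey covered in this week's second unit, assuming the Hugging Face transformers library and the public GPT-2 tokenizer; the sample sentence is made up.

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
text = "Tokenization turns raw text into model-ready IDs."
ids = tokenizer(text)["input_ids"]               # integer token IDs
tokens = tokenizer.convert_ids_to_tokens(ids)    # the subword pieces behind those IDs

print(tokens)   # BPE splits rarer words into subword pieces
print(ids)      # the integers an embedding layer would look up
```
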
Week 4

Multimodal + Retrieval-Augmented Systems

2 Units

  • 01
    Multimodal Embeddings (CLIP)
     
    • Understand how CLIP learns joint image-text representations using contrastive learning
    • Run your first CLIP similarity queries and interpret shared embedding space
    • Practice prompt engineering with images — and see how wording shifts retrieval results
    • Build retrieval systems: text-to-image and image-to-image using cosine similarity
    • Experiment with visual vector arithmetic: apply analogies to embeddings
    • Explore advanced tasks like visual question answering (VQA) and image captioning
    • Compare multimodal architectures: CLIP, ViLT, ViT-GPT2 and how they process fusion
    • Learn how modality-specific encoders (image/audio) integrate into transformer models
  • 02
    RAG & Retrieval Techniques (Mini Project 2)
     
    • Understand the full RAG pipeline: pre-retrieval, retrieval, and post-retrieval stages
    • Learn the difference between term-based and embedding-based retrieval methods (e.g., TF-IDF, BM25 vs. vector search)
    • Explore vector databases, chunking, and query optimization techniques like HyDE, reranking, and filtering
    • Use contrastive learning and cosine similarity to map queries and documents into shared vector spaces
    • Practice retrieval evaluation using recall@k, precision@k, and MRR (see the code sketch below)
    • Generate synthetic data using LLMs (Instructor, Pydantic) for local eval scenarios
    • Implement baseline vector search pipelines using LanceDB and OpenAI embeddings (3-small, 3-large)
    • Apply rerankers and statistically validate results with bootstrapping and t-tests to build intuition around eval reliability
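
As a simple illustration of embedding-based retrieval and recall@k from the RAG unit above, here is a toy NumPy sketch using cosine similarity over random vectors. It stands in for a real embedding model and vector database (such as the OpenAI embeddings and LanceDB used in the course) and is purely illustrative.

```python
import numpy as np

def cosine_sim(query, docs):
    """Cosine similarity between one query vector and a matrix of document vectors."""
    query = query / np.linalg.norm(query)
    docs = docs / np.linalg.norm(docs, axis=1, keepdims=True)
    return docs @ query

def recall_at_k(ranked_ids, relevant_ids, k):
    """Fraction of relevant documents that appear in the top-k results."""
    hits = len(set(ranked_ids[:k]) & set(relevant_ids))
    return hits / len(relevant_ids)

rng = np.random.default_rng(0)
docs = rng.normal(size=(100, 64))                 # pretend these came from an embedding model
query = docs[7] + 0.1 * rng.normal(size=64)       # a query that should match document 7
ranking = np.argsort(-cosine_sim(query, docs))    # best match first
print(recall_at_k(ranking.tolist(), relevant_ids=[7], k=5))   # 1.0 if doc 7 is in the top 5
```
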
Week 5

Classical Language Models

2 Units

  • 01
    N-Gram Language Models (Mini Project 3)
     
    • Understand what n-grams are and how they model language with simple probabilities
    • Implement bigram and trigram extraction using sliding windows over character sequences
    • Construct frequency dictionaries and normalize into probability matrices
    • Sample random text using bigram and trigram models to generate synthetic sequences (see the code sketch below)
    • Evaluate model quality using entropy, character diversity, and negative log likelihood (NLL)
    • One-hot encode inputs and build PyTorch models for bigram and trigram neural networks
    • Train models with cross-entropy loss and monitor training dynamics
    • Compare classical vs. neural models in terms of coherence, prediction accuracy, and generalization
  • 02
    Triplet Loss Embedding Finetuning for Search & Ranking (Mini Project 4)
     
    • Triplet-Based Embedding Adaptation
    • User-to-Music & E-commerce Use Cases
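
A minimal sketch of the bigram modeling steps in this week's first mini project: count character pairs with a sliding window, normalize the counts into probabilities, and sample text. The toy corpus is made up; the course project uses longer real text.

```python
import numpy as np

text = "hello hello world"                        # toy corpus
chars = sorted(set(text))
idx = {c: i for i, c in enumerate(chars)}

# count character bigrams with a sliding window over the text
counts = np.zeros((len(chars), len(chars)))
for a, b in zip(text, text[1:]):
    counts[idx[a], idx[b]] += 1

# normalize each row into P(next char | current char), leaving dead-end rows at zero
row_sums = counts.sum(axis=1, keepdims=True)
probs = np.divide(counts, row_sums, out=np.zeros_like(counts), where=row_sums > 0)

# sample a short sequence from the bigram model
rng = np.random.default_rng(0)
current, out = "h", ["h"]
for _ in range(10):
    row = probs[idx[current]]
    if row.sum() == 0:                            # no observed successor: stop sampling
        break
    current = rng.choice(chars, p=row)
    out.append(current)
print("".join(out))
```
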
Week 6

Attention & Finetuning

2 Units

  • 01
    Building Self-Attention Layers
     
    • Understand the motivation for attention: limitations of fixed-window n-gram models
    • Explore how word meaning changes with context using static vs contextual embeddings (e.g., "bank" problem)
    • Learn the mechanics of self-attention: Query, Key, Value, dot products, and weighted sums
    • Manually compute attention scores and visualize how softmax creates probabilistic context focus
    • Implement self-attention layers in PyTorch using toy examples and evaluate outputs
    • Visualize attention heatmaps using real LLMs to interpret which words the model attends to
    • Compare loss curves of self-attention models vs trigram models and observe learning dynamics
    • Understand how embeddings evolve through transformer layers and extract them using GPT-2
    • Build both single-head and multi-head transformer models; compare their predictions and training performance
    • Implement a Mixture-of-Experts (MoE) attention model and observe gating behavior on different inputs
    • Evaluate self-attention vs MoE vs n-gram models on fluency, generalization, and loss curves
    • Run meta-evaluation across all models to compare generation quality and training stability
  • 02
    Instructional Finetuning with LoRA (Mini Project 5)
     
    • Understand the difference between fine-tuning and instruction fine-tuning (IFT)
    • Learn when to apply fine-tuning vs IFT vs RAG based on domain, style, or output needs
    • Explore lightweight tuning methods like LoRA, BitFit, and prompt tuning
    • Build instruction-tuned systems for outputs like JSON, tone, formatting, or domain tasks
    • Apply fine-tuning to real case studies: HTML generation, resume scoring, financial tasks
    • Use Hugging Face PEFT tools to train and evaluate LoRA-tuned models (see the code sketch below)
    • Understand tokenizer compatibility, loss choices, and runtime hardware considerations
    • Compare instruction-following performance of base vs IFT models with real examples
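
A minimal sketch of attaching LoRA adapters with Hugging Face PEFT, as used in this week's instruction fine-tuning project. The model name, rank, and target modules are illustrative assumptions; the right target modules depend on the base model.

```python
# Assumes the `transformers` and `peft` libraries are installed.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")

lora_config = LoraConfig(
    task_type="CAUSAL_LM",
    r=8,                                   # rank of the low-rank update matrices
    lora_alpha=16,                         # scaling applied to the LoRA update
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],   # attention projections to adapt (model-dependent)
)

peft_model = get_peft_model(model, lora_config)
peft_model.print_trainable_parameters()    # only a small fraction of weights are trainable
```
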
Week 7

Architectures & Multimodal Systems

2 Units

  • 01
    Feedforward Networks & Loss-Centric Training
     
    • Understand the role of linear + nonlinear layers in neural networks
    • Explore how MLPs refine outputs after self-attention in transformers
    • Learn the structure of FFNs (e.g., two-layer projection + activation like ReLU/SwiGLU)
    • Implement your own FFN in PyTorch with real training/evaluation (see the code sketch below)
    • Compare activation functions: ReLU, GELU, SwiGLU
    • Understand how dropout prevents co-adaptation and improves generalization
    • Learn the role of LayerNorm, positional encoding, and skip connections
    • Build intuition for how transformers encode depth, context, and structure into layers
  • 02
    Multimodal Finetuning (Mini Project 6)
     
    • Understand what CLIP is and how contrastive learning aligns image/text modalities
    • Fine-tune CLIP for classification (e.g., pizza types) or regression (e.g., solar prediction)
    • Add heads on top of CLIP embeddings for specific downstream tasks
    • Compare zero-shot performance vs fine-tuned model accuracy
    • Apply domain-specific LoRA tuning to vision/text encoders
    • Explore regression/classification heads, cosine similarity scoring, and decision layers
    • Learn how diffusion models extend CLIP-like embeddings for text-to-image and video generation
    • Understand how video generation differs via temporal modeling, spatiotemporal coherence
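
A minimal PyTorch sketch of the feed-forward block described in this week's first unit: expand, apply a nonlinearity, project back, with a skip connection and LayerNorm. Dimensions and hyperparameters are illustrative.

```python
import torch
import torch.nn as nn

class TransformerFFN(nn.Module):
    """Position-wise feed-forward block: expand, apply a nonlinearity, project back."""
    def __init__(self, d_model=64, d_hidden=256, dropout=0.1):
        super().__init__()
        self.norm = nn.LayerNorm(d_model)
        self.net = nn.Sequential(
            nn.Linear(d_model, d_hidden),   # expand to a wider hidden dimension
            nn.GELU(),                      # nonlinearity (ReLU or SwiGLU are alternatives)
            nn.Dropout(dropout),
            nn.Linear(d_hidden, d_model),   # project back to the model dimension
        )

    def forward(self, x):
        return x + self.net(self.norm(x))   # skip connection around the FFN

x = torch.randn(2, 10, 64)                  # (batch, sequence, d_model)
print(TransformerFFN()(x).shape)            # torch.Size([2, 10, 64])
```
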
Week 8

Assembling & Training Transformers

2 Units

  • 01
    Full Transformer Architecture (From Scratch)
     
    • Connect all core transformer components: embeddings, attention, feedforward, normalization
    • Implement skip connections and positional encodings manually
    • Use sanity checks and test loss to debug your model assembly
    • Observe transformer behavior on structured prompts and simple sequences
    • Compare transformer predictions vs earlier trigram or FFN models to appreciate context depth
  • 02
    Advanced RAG & Retrieval Methods
     
    • Analyze case studies on production-grade RAG systems and tools like Relari and Evidently
    • Understand common RAG bottlenecks and solutions: chunking, reranking, retriever+generator coordination
    • Compare embedding models (small vs large) and reranking strategies
    • Evaluate real-world RAG outputs using recall, MRR, and qualitative techniques
    • Learn how RAG design changes based on use case (enterprise Q&A, citation engines, document summaries)
Week 9

Specialized Finetuning Projects

2 Units

  • 01
    CLIP Fine-Tuning for Insurance
     
    • Fine-tune CLIP to classify car damage using real-world image categories
    • Use Google Custom Search API to generate labeled datasets from scratch
    • Apply PEFT techniques like LoRA to vision models and optimize hyperparameters with Optuna
    • Evaluate accuracy using cosine similarity over natural language prompts (e.g. “a car with large damage”)
    • Deploy the model in a real-world insurance agent workflow using LLaMA for reasoning over predictions
  • 02
    Math Reasoning & Tool-Augmented Finetuning
     
    • Use SymPy to introduce symbolic reasoning to LLMs for math-focused applications (see the code sketch below)
    • Fine-tune with Chain-of-Thought (CoT) data that blends natural language with executable Python
    • Learn two-stage finetuning: CoT → CoT+Tool integration
    • Evaluate reasoning accuracy using symbolic checks, semantic validation, and regression metrics
    • Train quantized models with LoRA and save for deployment with minimal resource overhead
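
A minimal sketch of a SymPy-based symbolic correctness check, of the kind this week's math reasoning unit uses for evaluation. The expressions are made-up examples, not outputs from the course's fine-tuned model.

```python
# Assumes the `sympy` library is installed.
import sympy as sp

x = sp.symbols("x")
reference_answer = sp.expand((x + 1) ** 2)     # ground-truth expression: x**2 + 2*x + 1
model_answer = sp.sympify("x**2 + 2*x + 1")    # parse the model's answer string

# Two expressions are equivalent if their difference simplifies to zero
is_correct = sp.simplify(reference_answer - model_answer) == 0
print(is_correct)   # True
```
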
Week 10

Advanced RLHF & Engineering Architectures

2 Units

  • 01
    Preference-Based Finetuning — DPO, PPO, RLHF & GRPO
     
    • Learn why base LLMs are misaligned and how preference data corrects this
    • Understand the difference between DPO, PPO, RLHF, and GRPO
    • Generate math-focused DPO datasets using numeric correctness as preference signal
    • Apply ensemble voting to simulate “majority correctness” and eliminate hallucinations
    • Evaluate model learning using preference alignment instead of reward models
    • Compare training pipelines: DPO vs RLHF vs PPO — cost, control, complexity
  • 02
    Building AI Code Agents — Case Studies from Copilot, Cursor, Windsurf
     
    • Reverse engineer modern code agents like Copilot, Cursor, Windsurf, and Augment Code
    • Compare transformer context windows vs RAG + AST-powered systems
    • Learn how indexing, retrieval, caching, and incremental compilation create agentic coding experiences
    • Explore architecture of knowledge graphs, graph-based embeddings, and execution-aware completions
    • Design your own multi-agent AI IDE stack: chunking, AST parsing, RAG + LLM collaboration
Week 11

Agents & Multimodal Code Systems

2 Units

  • 01
    Agent Design Patterns
     
    • Understand agent design patterns: Tool use, Planning, Reflection, Collaboration
    • Learn evaluation challenges in agent systems: output variability, partial correctness
    • Study architecture patterns: single-agent vs constellation/multi-agent
    • Explore memory models, tool integration, and production constraints
    • Compare agent toolkits: AutoGen, LangGraph, CrewAI, and practical use cases
  • 02
    Text-to-SQL and Text-to-Music Architectures
     
    • Implement text-to-SQL using structured prompts and fine-tuned models
    • Train and evaluate SQL generation accuracy using execution-based metrics
    • Explore text-to-music pipelines: prompt → MIDI → audio generation
    • Compare contrastive vs generative learning in multimodal alignment
    • Study evaluation tradeoffs for logic-heavy vs creative outputs
Week 12

Deep Internals & Production Pipelines

2 Units

  • 01
    Positional Encoding + DeepSeek Internals
     
    • Understand why self-attention requires positional encoding
    • Compare encoding types: sinusoidal, RoPE, learned, binary, integer (see the code sketch below)
    • Study skip connections and layer norms: stability and convergence
    • Learn from DeepSeek-V3 architecture: MLA (KV compression), MoE (expert gating), MTP (parallel decoding), FP8 training
    • Explore when and why to use advanced transformer optimizations
  • 02
    LLM Production Chain (Inference, Deployment, CI/CD)
     
    • Map the end-to-end LLM production chain: data, serving, latency, monitoring
    • Explore multi-tenant LLM APIs, vector databases, caching, rate limiting
    • Understand tradeoffs between hosting vs using APIs, and inference tuning
    • Plan a scalable serving stack (e.g., LLM + vector DB + API + orchestrator)
    • Learn about LLMOps roles, workflows, and production-level tooling
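
A minimal NumPy sketch of the sinusoidal positional encoding discussed in this week's first unit; the sequence length and model dimension are arbitrary.

```python
import numpy as np

def sinusoidal_positional_encoding(seq_len, d_model):
    """Classic sinusoidal positional encodings from 'Attention Is All You Need'."""
    positions = np.arange(seq_len)[:, None]                    # (seq_len, 1)
    dims = np.arange(d_model)[None, :]                         # (1, d_model)
    angle_rates = 1.0 / np.power(10000, (2 * (dims // 2)) / d_model)
    angles = positions * angle_rates
    encoding = np.zeros((seq_len, d_model))
    encoding[:, 0::2] = np.sin(angles[:, 0::2])                # even dimensions use sine
    encoding[:, 1::2] = np.cos(angles[:, 1::2])                # odd dimensions use cosine
    return encoding

pe = sinusoidal_positional_encoding(seq_len=16, d_model=32)
print(pe.shape)   # (16, 32): each row is a unique, smoothly varying position signature
```
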
Week 13

Enterprise LLMs, Hallucinations & Career Growth

4 Units

  • 01
    RAG Hallucination Control & Enterprise Search
     
    • Explore use of RAG in enterprise settings with citation engines
    • Compare hallucination reduction strategies: constrained decoding, retrieval, DPO
    • Evaluate model trustworthiness for sensitive applications
    • Learn from production examples in legal, compliance, and finance contexts
  • 02
    Career Prep — Roles, Interviews, and AI Career Paths
     
    • Break down roles: AI Engineer, Model Engineer, Researcher, PM, Architect
    • Prepare for FAANG/LLM interviews with DSA, behavioral prep, and project portfolio
    • Use ChatGPT and other tools for mock interviews and story crafting
    • Learn how to build a standout AI resume, repo, and demo strategy
    • Explore internal AI projects, indie hacker startup paths, and transition guides
  • 03
    Staying Current with AI (Research, News, and Tools)
     
    • Track foundational trends: RAG, Agents, Fine-tuning, RLHF, Infra
    • Understand tradeoffs of long context windows vs retrieval pipelines
    • Compare agent frameworks (CrewAI vs LangGraph vs Relevance AI)
    • Learn from real 2025 GenAI use cases: productivity + emotion-first design
    • Stay current via curated newsletters, YouTube breakdowns, and community tools
  • 04
    Bonus Content
     
    • Two courses: Fundamentals of Transformers with Alvin Wan, and Responsive LLM Applications with Server-Sent Events
    • Prompt engineering templates
    • AI newsletters, channels, X accounts, and Reddit communities
    • Breakdown of LLaMA components
    • Open-source models and their capabilities
    • Data sources
    • AI-specific cloud services
    • Open-source frameworks
    • Project ideas from other indie hackers
    • Bonus: FAANG machine learning interview cheatsheet
    • Free API keys for building AI applications
    • How people are using GenAI in 2025
    • How to stay ahead of AI trends
    • n8n and free high-ROI AI automation templates worth $50,000

Resources

You’ll receive a comprehensive set of resources to help you master large language models.

  • Prompt engineering templates

  • AI newsletters, channels, X accounts, and Reddit communities

  • Breakdown of LLaMA components

  • Breakdown of Mistral components

Bonus

Unlock exclusive bonuses to accelerate your AI journey.

  • The ability to build large language models, which can increase your salary by $50k a year (worth $500k over 10 years).

  • A cheatsheet for generative AI interviews at FAANG companies, worth $50k a year ($500k over 10 years).

  • A complete course on end-to-end streaming with LangChain, including a fully functional application for startups ($15k in value).

  • The ability to run an AI consulting practice, worth $100k a year ($1M over 10 years).

  • The ability to build an AI company ($1M in annual value).

  • A technical and business design review of your project from Alvin and Zao ($25,000 in value).

Subscribe for a Free Lesson

By subscribing to the newline newsletter, you will also receive weekly, hands-on tutorials and updates on upcoming courses in your inbox.

What Our Students are Saying

Meet the Bootcamp Instructors

Dr. Dipen

I am an AI/ML researcher with 150+ citations and 16 published research papers. I have three tier-1 publications, including Internet of Things (Elsevier), Biomedical Signal Processing and Control (Elsevier), and IEEE Access. In my research journey, I have collaborated with NASA Glenn Research Center, Cleveland Clinic, and the U.S. Department of Energy for various research projects. I am also an official reviewer and have reviewed over 100 research papers for Elsevier, IEEE Transactions, ICRA, MDPI, and other top journals and conferences. I hold a PhD from Cleveland State University with a focus on large language models (LLMs) in cybersecurity, and I also earned a master’s degree in informatics from Northeastern University.

zaoyang

👋 Hi, I’m Zao Yang, a co-founder of Newline, where we’ve deployed multiple generative AI apps for sourcing, tutoring, and data extraction. Prior to this, I co-created Farmville (200 million users, $3B in revenue) and Kaspa (currently valued at $3B). I’m self-taught in generative AI, deep learning, and machine learning, and have helped over 150,000 professionals from companies like Salesforce, Adobe, Disney, and Amazon level up their skills quickly and effectively. In this workshop, I’ll share my experience building AI applications from the ground up and show you how to apply these techniques to real-world projects. Join me to dive into the world of generative AI and learn how to create impactful applications!

Purchase the bootcamp today

One-Time Purchase

AI Bootcamp 1

$11,000
AI Bootcamp
  • Discord Community Access
  • Full Transcripts
  • Project Completion Guarantee
  • Lifetime Access

One-Time Purchase

AI Bootcamp 1 and 2

$13,800
AI Bootcamp
  • Discord Community Access
  • Full Transcripts
  • Project Completion Guarantee
  • Lifetime Access

newline Pro Subscription

$18/MO

Get unlimited access to the course, plus 60+ newline books, guides and courses.

AI Bootcamp

Billed annually or $40/mo billed monthly. Free to cancel anytime.

  • Discord Community Access
  • Full Transcripts
  • Project Completion Guarantee
  • Lifetime Access

Plus:

  • Unlimited access to 60+ newline Books, Guides and Courses
  • Interactive, Live Project Demos for Every newline Book, Guide and Course
  • Complete Project Source Code for Every newline Book, Guide and Course
  • Best Value 🏆

Frequently Asked Questions

How is this different from other AI bootcamps?

Bootcamps vary widely in scope and depth, generally targeting individuals seeking clear, concrete outcomes. One of the main advantages they offer is the interactive learning environment between peers and instructors. In the AI space, bootcamps typically fall into several categories: AI programming, ML/Gen AI, foundational model engineering, and specific tracks like FAANG foundational model engineering.

Most bootcamps aim to provide specialized skills for a particular career path—like becoming an ML/Gen AI engineer. These programs often cost $15,000–$25,000, run over six months to a year, and involve a rigorous weekly schedule with around four hours of lectures, two hours of Q&A, and an additional 10–15 hours of homework. Traditional coding bootcamps designed to take someone from a non-technical to a technical role are similar in cost and duration.

In contrast, our program offers a unique approach by balancing practical AI programming skills with a deep understanding of foundational model concepts. Many other AI programming bootcamps focus exclusively on specific areas like Retrieval-Augmented Generation (RAG) or fine-tuning and do not delve into foundational model concepts. This can leave students without the judgment and first-principles reasoning needed to understand and innovate with AI at a fundamental level.

Our curriculum is crafted to cover AI programming while incorporating essential foundational model concepts, giving you a well-rounded perspective and the skills to approach AI with a strong theoretical foundation. To my knowledge, few, if any, bootcamps cover foundational models in a way that empowers students to understand the entire AI model lifecycle, adapt models effectively, and confidently pursue project ideas with guided support.

What should I look for in this AI Bootcamp?

This bootcamp offers a comprehensive curriculum covering the entire lifecycle of Large Language Models (LLMs). It balances hands-on programming with theoretical foundations, ensuring you gain practical skills and deep conceptual understanding. Highlights include:

  • Direct mentorship from Alvin Wan (Apple, Tesla, Berkeley) and Zao Yang (Farmville, Kaspa).
  • Hands-on projects like building, deploying, and adapting LLMs.
  • Access to industry-standard tools and frameworks like Huggingface, Modal, and LlamaIndex.
  • Career-focused outcomes such as consulting opportunities, AI startup guidance, and advanced technical skills.

Who is this Artificial Intelligence Bootcamp ideal for?

This bootcamp is tailored for:

  • Professionals aiming to implement AI solutions at work (e.g., RAG or private fine-tuning).
  • Those interested in building vertical foundational models for specific domains.
  • Aspiring consultants or entrepreneurs looking to leverage AI knowledge to create startups or offer services.

What are the eligibility criteria for this AI Bootcamp?

The main criteria are a willingness to learn and a commitment to actively participate. While a basic understanding of programming is helpful, the bootcamp assumes no prior AI or machine learning knowledge.

Are there any required skills or Python programming experience needed before enrolling?

Basic Python programming knowledge is recommended but not mandatory. The bootcamp starts from fundamental concepts and provides all the necessary support to help you succeed.

What is the course structure?

  • Total weekly time commitment: approximately 3 hours of structured activities, including 2 hours of lectures and a dedicated 1-hour Q&A office hours session.
  • Hands-on programming: expect to dedicate 2–4 hours to practical programming exercises.
  • Individual project work: the time spent on your project is up to you, so you can invest as much as you wish to build your skills.
  • Optional guidance sessions: we may add an extra 1-hour session for optional guidance on selecting a niche or project topic.
  • Recordings available: all sessions will be recorded for those unable to attend live, ensuring that no one misses valuable content.
  • Flexible scheduling: we’ll schedule the live sessions to best accommodate the group.

Do I need any pre-requisite?

You need to be able to program and be committed to doing the work and asking questions. Some basic Python programming experience helps, but you don’t need to have taken a machine learning course; we assume no prior ML knowledge.

Anything I need to prepare?

Ideally, think about the project you want to create. Some people want to apply AI at their work; others want to build a vertical foundational model.

Why should I take the Artificial Intelligence Bootcamp from newline?

This bootcamp stands out because:

  • It combines hands-on programming with foundational model concepts, giving you a holistic understanding of AI.
  • It includes real-world applications, guided projects, and personalized mentorship.
  • It guarantees project completion with expert reviews from Alvin Wan and Zao Yang.
  • Flexible scheduling, recordings, and a supportive learning environment make it accessible and effective.

To what extent will the program delve into generative AI concepts and applications?

The curriculum deeply explores generative AI, covering topics like tokenization, transformer models, instruction tuning, and Retrieval-Augmented Generation (RAG). You’ll also learn how to build applications in text, voice, images, video, and multi-modal AI.

Do you have something I can send my manager?

Hey {manager}

There's a course called AI Engineer Bootcamp that I'd love to enroll in. It's a live, online course with peers who are in similar roles to me, and it's run on Newline, where 100,000+ professionals from companies like Salesforce, Adobe, Disney, and Amazon go to level up when they need to learn quickly and efficiently.

A few highlights:

  • Direct access to Alvin Wan, the expert instructor who worked on LLMs at Apple Intelligence.
  • Hands-on working sessions to test new tactics and ideas. Unlike other classes, it teaches the fundamentals of the entire LLM lifecycle, including how to understand LLMs and adapt them to specific projects. The course also guarantees that I’ll be able to build a project, which can apply directly to a project at work.
  • It also provides the latest thinking in the space on how to solve problems we're facing.

I anticipate being able to put my learnings directly into practice during the course. After the course, I can share the learnings with the team so our entire team levels up.

The course costs USD as an early bird discount or X USD through a payment plan. If you like, you can review course details here, including the instructor’s bio:

https://newline.notion.site/AI-live-cohort-1303f12eb0228088a11dc779897d15bd?pvs=4

What do you think?

Thanks, {Your Name}

Do you have any financing?

We can provide a payment plan. In the future we’ll offer several payment plans, and the current plan is flexible enough to accommodate you.

What are the career outcomes after completing the AI Bootcamp with newline?

Graduates can pursue careers such as:

  • AI engineers with enhanced earning potential (average salary increases of $50k/year).
  • Consultants specializing in AI for enterprises or startups.
  • Entrepreneurs building AI-driven companies.
  • Technical leads in developing and deploying advanced AI solutions.

Will I receive a certificate after completing the AI Bootcamp with newline?

Yes, you will receive a certificate of completion, demonstrating your expertise in AI concepts and applications.

Are there any hands-on projects incorporated into the AI Bootcamp curriculum?

Yes, the curriculum is highly project-focused. You’ll work on building and deploying LLMs, adapting models with RAG and fine-tuning, and applying AI to real-world use cases, ensuring practical experience and a portfolio of projects.

I have a timing issue. What can you do?

You can attend this cohort and also attend the next one. Otherwise, you’ll have to wait until the next cohort.

Do you have a guarantee?

We guarantee that we’ll help you build your project. For this, we need to align on the project scope, the budget, and your time commitment, and you’ll need to commit to working on the project. For example, a RAG-based project, fine-tuning, or building a small foundational model is entirely within scope. If you want to build a large foundational model, the project will have to focus on a smaller one first. You’ll also have to commit to learning everything needed for the course.

What is the target audience?

The bootcamp targets three personas.

  1. Someone who wants to apply RAG and instruction fine-tuning to private, on-premise data at work
  2. Someone who wants to fine-tune a model to build a vertical foundational model
  3. Someone who wants to use AI knowledge for consulting and to build AI startups

Will you be covering multi-modal applications?

Yes. We’ll cover multi-modal applications and teach you how to keep learning within this space as well.

What kind of support and resources are available outside the AI Bootcamp?

You’ll have access to:

  • Direct mentorship from Alvin Wan and Zao Yang.
  • Resources like prompt engineering templates, cheat sheets, and curated datasets.
  • Optional guidance sessions for project topics and niche selection.
  • Recordings of all sessions for flexible learning.

How does the AI Bootcamp at newline stay updated with the latest advancements and trends in the field?

The curriculum reflects cutting-edge developments in AI, informed by the instructors’ active work in the field. Topics like multi-modal LLMs, RAG, and emerging tools are continuously integrated to ensure relevance.

What is the salary of an AI Engineer in the USA?

AI engineers in the USA earn an average salary of $120,000–$200,000 annually, depending on their expertise and experience. Completing this bootcamp can increase your earning potential by $50,000 annually.

Do you offer preparation for Artificial Intelligence interview questions?

Yes, the bootcamp includes a cheatsheet for AI interviews at top companies (e.g., FANG) and guidance for acing technical and business-focused roles in AI.

What are the possible careers in Artificial Intelligence?

AI offers diverse career opportunities, including:

  • AI/ML Engineer
  • Data Scientist
  • AI Consultant
  • Research Scientist
  • AI Startup Founder
  • Product Manager for AI-driven solutions

AI Bootcamp

$9,800