Latest Tutorials

Learn about the latest technologies from fellow newline community members!

  • React
  • Angular
  • Vue
  • Svelte
  • NextJS
  • Redux
  • Apollo
  • Storybook
  • D3
  • Testing Library
  • JavaScript
  • TypeScript
  • Node.js
  • Deno
  • Rust
  • Python
  • GraphQL

Replit Agent - An Introductory Guide

Learn about Replit Agent, an advanced AI coding agent capable of building apps from scratch. Through natural language interactions and real-time assistance, Replit Agent sets up environments, writes code, and deploys apps, all within minutes.

    A How to Guide: Prompt Engineering for Reasoning Models

    In our previous article on prompt engineering, we covered the basics of prompt engineering, the difference between reasoning and non-reasoning (traditional) models, and how to prompt traditional models. Today we're going to focus on reasoning models like DeepSeek-R1 and OpenAI's o1. To recap: when prompting a traditional model, the anatomy of a good prompt looks something like this:

      1. Goal - the task you want to achieve
      2. Detail - specifics of how you want it done
      3. Role - the role you want the model to adopt, e.g. "you are a senior software developer who writes clean code"
      4. Format - how you want the output presented, e.g. bullet points, JSON, etc.
      5. Examples - sample input/output pairs
      6. Tone - the conversational tone to adopt in the output (professional vs. casual, etc.)
      7. Context - considerations and supplementary information to take into account, e.g. input documents or a codebase to work with
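As a rough illustration of that anatomy, here is a minimal Python sketch that assembles the seven components into a single prompt string. The build_prompt helper and its field values are hypothetical examples, not part of the article or of any particular SDK; the resulting string would be passed to whichever model client you use.

```python
# Illustrative only: assemble the seven prompt components described above
# into one prompt string. Field names mirror the article's prompt anatomy.

def build_prompt(goal, detail, role, fmt, examples, tone, context):
    """Combine the components of a traditional-model prompt into one string."""
    sections = [
        f"Role: {role}",
        f"Goal: {goal}",
        f"Detail: {detail}",
        f"Format: {fmt}",
        f"Examples:\n{examples}",
        f"Tone: {tone}",
        f"Context:\n{context}",
    ]
    return "\n\n".join(sections)

prompt = build_prompt(
    goal="Summarize the attached changelog for end users.",
    detail="Group changes by feature area and skip internal refactors.",
    role="You are a senior software developer who writes clean code.",
    fmt="Bullet points, one line per change.",
    examples="Input: 'fix: null check in auth' -> Output: '- Fixed a login crash'",
    tone="Professional but approachable.",
    context="Changelog:\n- feat: add dark mode\n- fix: null check in auth",
)
print(prompt)
```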

    I got a job offer, thanks in large part to your teaching. They sent a test as part of the interview process, and this was a huge help in implementing my own Node server.

    This has been a really good investment!

    Advance your career with newline Pro.

    Only $40 per month for unlimited access to over 60 books, guides, and courses!

    Learn More

    RAG: Bridging the Gap Between AI and Real-Time Data

    Today we often hear about incredible AI advancements that promise to make our lives easier. But beyond developing and improving new AI models, we keep finding new ways to use them to their full potential. One exciting technique built on top of LLMs is Retrieval-Augmented Generation, or RAG for short. It connects real-time data to the power of AI models, and knowing how RAG works raises the ceiling of your expertise as an AI engineer. So, in this opening article we cover the core concepts, and in upcoming articles we will build applications to apply that knowledge in practice. Large language models (LLMs) generate text by predicting the most probable next word, but without access to real-time or domain-specific information they produce errors, outdated answers, and hallucinations.
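As a rough sketch of the retrieve-then-generate loop behind RAG, the Python snippet below pulls the most relevant documents for a question and prepends them to the prompt. The DOCUMENTS list, the keyword-overlap retriever, and the answer_with_rag helper are illustrative assumptions; a production system would use vector embeddings for retrieval and send the augmented prompt to an actual LLM.

```python
# Illustrative RAG sketch: retrieve relevant documents for a question, then
# build an augmented prompt so the model answers from current data rather
# than from its (possibly outdated) training set.

DOCUMENTS = [
    "Order #1042 shipped on 2024-06-03 via express courier.",
    "The refund policy allows returns within 30 days of delivery.",
    "Support hours are 9am-5pm CET, Monday through Friday.",
]

def retrieve(question: str, docs: list[str], top_k: int = 2) -> list[str]:
    """Toy retriever: rank documents by how many words they share with the question."""
    words = set(question.lower().split())
    ranked = sorted(docs, key=lambda d: len(words & set(d.lower().split())), reverse=True)
    return ranked[:top_k]

def answer_with_rag(question: str) -> str:
    """Build the augmented prompt; in practice this string is sent to an LLM."""
    context = "\n".join(retrieve(question, DOCUMENTS))
    return (
        "Answer using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}"
    )

print(answer_with_rag("When did order #1042 ship?"))
```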