Tutorials on Node

Learn about Node from fellow newline community members!

  • React
  • Angular
  • Vue
  • Svelte
  • NextJS
  • Redux
  • Apollo
  • Storybook
  • D3
  • Testing Library
  • JavaScript
  • TypeScript
  • Node.js
  • Deno
  • Rust
  • Python
  • GraphQL

3 Ways to Optimize Your Development Workflow When Working with React & Node

Find out how to optimize your development workflow when running Create React App and a Node server at the same time.

If you've ever worked on a project with multiple package.json files, you might know the pain of juggling multiple terminal tabs, remembering which commands start up which server, or handling CORS errors. Luckily, there are a few tools available to us that can alleviate some of these headaches. In this post, we'll go over three things you can do to optimize your development workflow when working on a project with a React front end (specifically, Create React App) and a Node back end.

Let's say we're working in a monorepo with two package.json files: one in a client directory for a React front end powered by Create React App, and one in the root of the repo for a Node back end that exposes an API our React app uses. Our React app runs on localhost:3000, our Node app runs on localhost:8888, and both apps are started using npm start. Since we have two package.json files, to have our front end and back end up and running we need to make sure we've run npm install and npm start in both the root directory and the client directory. Let's take a look at how we can streamline this.

One improvement we can make is adding a build tool to run multiple npm commands at the same time, saving us the hassle of running npm start in multiple terminal tabs. To do this, we can add an npm package called Concurrently to the root of our project, installed as a dev dependency. Then, in our root package.json scripts, we'll update our start script to use Concurrently, leaving us with three npm scripts: npm run server starts up our Node app, npm run client runs npm start in the client directory to start up our React app, and npm start runs both npm run server and npm run client at the same time.

Another aspect of our workflow we can improve is dependency installation. Currently, we need to manually run npm install for each package.json file when setting up the project. Instead of going through that hassle, we can add a postinstall script to our root package.json to automatically run npm install in the client directory after installation has finished in the root directory. Now, when we install our monorepo, all we need to do to get up and running is run npm install and then npm start at the root of the project; no need to cd into any other directories to run other commands.

As mentioned above, our Node back end exposes API endpoints that are used by our React app. Let's say our Node app has a /refresh_token endpoint. Out of the box, if we tried to send a GET request to http://localhost:8888/refresh_token from our React app on http://localhost:3000, we would run into CORS issues. CORS stands for Cross-Origin Resource Sharing. Usually, when you encounter CORS errors, it's because you are trying to access resources from another origin (i.e. http://localhost:3000 and http://localhost:8888), and the origin you're requesting resources from has not permitted it. To tell the development server to proxy any unknown requests to our API server in development, we can set up a proxy in our React app's package.json file. In client/package.json, we'll add a proxy for http://localhost:8888 (where our Node app runs).
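The post's snippets aren't reproduced on this listing page, but a minimal sketch of the two package.json changes described above might look like this (the server and client script bodies are assumptions for illustration):

```json
{
  "scripts": {
    "server": "node index.js",
    "client": "cd client && npm start",
    "start": "concurrently \"npm run server\" \"npm run client\"",
    "postinstall": "cd client && npm install"
  }
}
```

And in client/package.json, the proxy is a single field:

```json
{
  "proxy": "http://localhost:8888"
}
```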
Now, if we restart the server and make a request to our Node app's /refresh_token endpoint (without the http://localhost:8888 prefix) using fetch(), the CORS error should be resolved. The next time you work on a monorepo project like this, try out these three tips to streamline your development workflow! Be sure to check out our new course being released soon, Build a Spotify Connected App, where we apply these concepts to build a real-world, full stack web app!
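For reference, that proxied request from the React app might look like this minimal sketch (the response shape is an assumption):

```js
// a relative path; CRA's dev server forwards it to http://localhost:8888
fetch('/refresh_token')
  .then((res) => res.json())
  .then((data) => console.log('refreshed token:', data))
  .catch(console.error);
```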


Deploying a Node.js and PostgreSQL Application to Heroku

Serving a web application to a global audience requires deploying, hosting, and scaling it on reliable cloud infrastructure. Heroku is a cloud platform as a service (PaaS) that supports many server-side languages (e.g., Node.js, Go, Ruby, and Python), monitors application status in a customizable dashboard, and maintains an add-ons ecosystem for integrating tools/services such as databases, schedulers, search engines, document/image/video processors, etc. Although it is built on AWS, Heroku is simpler to use than AWS: it automatically provisions resources and configures low-level infrastructure, so developers can focus exclusively on their application without the additional headache of manually setting up each piece of hardware and installing an operating system, runtime environment, and so on.

When deploying to Heroku, Heroku's build system packages the application's source code and dependencies together with a language runtime using a buildpack and slug compiler to generate a slug, a highly optimized and compressed version of your application. Heroku loads the slug onto a lightweight container called a dyno. Depending on your application's resource demands, it can be scaled horizontally across multiple concurrent dynos. These dynos run on a shared host, but the dynos responsible for running your application are isolated from dynos running other applications. Initially, your application will run on a single web dyno, which serves your application to the world. If a single web dyno cannot sufficiently handle incoming traffic, you can always add more web dynos. For requests that take longer than 500ms to complete, such as uploading media content, consider delegating the expensive work as a background job to a worker dyno. Worker dynos process these jobs from a job queue and run asynchronously to web dynos, freeing up the web dynos' resources.

Below, I'm going to show you how to deploy a Node.js and PostgreSQL application to Heroku. First, let's download the Node.js application by cloning the project from its GitHub repository, and then walk through its architecture. It is a multi-container Docker application that consists of three services: an Express.js server, a PostgreSQL database, and pgAdmin. As a multi-container Docker application orchestrated by Docker Compose, the PostgreSQL database and pgAdmin containers are spun up from the postgres and dpage/pgadmin4 images respectively; these images do not need any additional modifications. (docker-compose.yml)

The Express.js server, which resides in the api subdirectory, connects to the PostgreSQL database via the pg PostgreSQL client. The module api/lib/db.js defines a Database class that establishes a reusable pool of clients upon instantiation for efficient memory consumption. The connection string URI follows the format postgres://[username]:[password]@[host]:[port]/[db_name], and it is accessed from the environment variable DATABASE_URL. Anytime a controller function (the callback argument of the methods app.get, app.post, etc.) calls the query method, the server connects to the PostgreSQL database via an available client from the pool. Then the server queries the database, directly passing the arguments of the query method to the client.query method. Once the database sends the requested data back to the server, the client is released back to the pool, available for the next request to use.
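The post's api/lib/db.js isn't reproduced here, but a minimal sketch of the Database class it describes, using the pg module's built-in pooling, might look like this:

```js
// api/lib/db.js - a sketch; route-specific helpers are omitted
const { Pool } = require('pg');

class Database {
  constructor() {
    // DATABASE_URL follows postgres://[username]:[password]@[host]:[port]/[db_name]
    this.pool = new Pool({ connectionString: process.env.DATABASE_URL });
  }

  query(text, params) {
    // pool.query checks out an available client, runs the query,
    // and releases the client back to the pool when the promise settles
    return this.pool.query(text, params);
  }

  getAllTables() {
    // low-level information about the tables in the database
    return this.query(
      "SELECT table_name FROM information_schema.tables WHERE table_schema = 'public'"
    );
  }
}

module.exports = Database;
```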
Additionally, there's a getAllTables method for retrieving low-level information about the tables available in our PostgreSQL database. In this case, our database only contains a single table: cp_squirrels. (api/lib/db.js)

The table cp_squirrels is seeded with records from the 2018 Central Park Squirrel Census dataset downloaded from the NYC Open Data portal. The dataset, downloaded as a CSV file, contains the fields obs_date (observation date) and lat_lng (coordinates of observation) with values that are not compatible with the PostgreSQL data types DATE and POINT respectively. Instead of directly copying the contents of the CSV file to the cp_squirrels table, we copy from the output of a GNU awk ("gawk") script. This script... (db/create.sql) Upon the initialization of the PostgreSQL database container, this SQL file is run by adding it to the docker-entrypoint-initdb.d directory. (db/Dockerfile)

This server exposes a RESTful API with two endpoints: GET /tables and POST /api/records. The GET /tables endpoint simply calls the db.getAllTables method, and the POST /api/records endpoint retrieves data from the PostgreSQL database based on a query object sent within the incoming request. To bypass CORS restrictions for clients hosted on a different domain (or running on a different port on the same machine), all responses must have the Access-Control-Allow-Origin header set to the allowable domain (process.env.CLIENT_APP_URL) and the Access-Control-Allow-Headers header set to Origin, X-Requested-With, Content-Type, Accept. (api/index.js) Notice that the Express.js server requires three environment variables: CLIENT_APP_URL, PORT, and DATABASE_URL. These must be added to Heroku, which we will do later on in this post.

The Dockerfile for the Express.js server describes how to build the server's Docker image and automates the process of setting up and running the server. Since the server must run within a Node.js environment and relies on several third-party dependencies, the image is built upon the node base image and installs the project's dependencies before running the server via the npm start command. (api/Dockerfile) However, because the filesystem of a Heroku dyno is ephemeral, volume mounting is not supported. Therefore, we must create a new file named Dockerfile-heroku that is dedicated only to the deployment of the application to Heroku and does not rely on a volume. (api/Dockerfile-heroku)

Unfortunately, you cannot deploy a multi-container Docker application via Docker Compose to Heroku. Therefore, we must deploy the Express.js server to a web dyno with Docker and separately provision a PostgreSQL database via the Heroku Postgres add-on. To deploy an application with Docker, you must either push a pre-built image to the Heroku Container Registry or have Heroku build the image from a heroku.yml manifest. For this tutorial, we will deploy the Express.js server to Heroku by building a Docker image with heroku.yml and deploying this image to Heroku. Let's create a heroku.yml manifest file inside of the api subdirectory. Since the Express.js server will be deployed to a web dyno, we must specify the Docker image to build for the application's web process, which the web dyno belongs to. (api/heroku.yml) Because our api/Dockerfile already has a CMD instruction, which specifies the command to run within the container, we don't need to add a run section. Let's add a setup section, which defines the environment's add-ons and configuration variables during the provisioning stage.
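A sketch of what the complete manifest might look like once the setup section (described next) is in place:

```yaml
# api/heroku.yml - a sketch based on the steps in this post
setup:
  addons:
    - plan: heroku-postgresql:hobby-dev
      as: DATABASE
build:
  docker:
    web: Dockerfile-heroku
```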
Within this section, add the Heroku PostgreSQL add-on. Choose the free "Hobby Dev" plan and give it the unique name DATABASE. This unique name is optional and is used to distinguish it from other Heroku PostgreSQL add-ons. Fortunately, once the PostgreSQL database is provisioned, the DATABASE_URL environment variable, which contains the connection information for the newly provisioned database, will be made available to our application.

Check if your machine already has the Heroku CLI installed. If not, install it; on macOS, it can be installed via Homebrew, and for other operating systems, follow the instructions here. After installation, for the setup section of the heroku.yml manifest file to be recognized and used when creating a Heroku application, switch to the beta update channel and install the heroku-manifest plugin. Without this step, the PostgreSQL database add-on will not be provisioned from the heroku.yml manifest file; you would have to manually provision the database via the Heroku dashboard or the heroku addons:create command. Once installed, close the terminal window and open a new one for the changes to take effect. Note: to switch back to the stable update stream, uninstall this plugin.

Now, authenticate yourself by running the heroku login command. This command prompts you to press any key to open a login page within a web browser; enter your credentials within the login form, and once authenticated, the Heroku CLI will automatically log you in. Note: if you want to remain within the terminal, entering your credentials directly there, add the -i option to the command.

Within the api subdirectory, create a Heroku application with the --manifest flag. This command automatically sets the stack of the application to container and sets the remote repository of the api subdirectory to heroku. When you visit the Heroku dashboard in a web browser, this newly created application is listed under your "Personal" applications. Set the configuration variable CLIENT_APP_URL to a domain that should be allowed to send requests to the Express.js server. Note: the PORT environment variable is automatically exposed by the web dyno for the application to bind to, and, as previously mentioned, once the PostgreSQL database is provisioned, the DATABASE_URL environment variable will automatically be exposed. Under the application's "Settings" tab in the Heroku dashboard, you can find all configuration variables set for your application under the "Config Vars" section.

Create a .gitignore file within the api subdirectory (api/.gitignore), commit all the files within the api subdirectory, and push the application to the remote Heroku repository. The application will be built and deployed to the web dyno. Ensure that the application has successfully deployed by checking the logs of this web dyno. If you visit https://<application-name>.herokuapp.com/tables in your browser, a successful response is returned and printed to the browser. In case the PostgreSQL database is not provisioned, manually provision it with the heroku addons:create command, then restart the dynos so the DATABASE_URL environment variable is available to the Express.js server at runtime. Deploy your own containerized applications to Heroku!
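As a quick reference, that command sequence might look like this sketch (the application name and domain are placeholders):

```bash
# install the CLI (macOS), then enable manifest support
brew install heroku/brew/heroku
heroku update beta
heroku plugins:install @heroku-cli/plugin-manifest

# authenticate, then create the app from within the api subdirectory
heroku login
heroku create <application-name> --manifest

# set the origin the server should allow
heroku config:set CLIENT_APP_URL=<allowed-domain>

# deploy and watch the logs
git push heroku master
heroku logs --tail

# only if the database was not auto-provisioned
heroku addons:create heroku-postgresql:hobby-dev --as DATABASE
heroku ps:restart
```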



Server Side Rendering with React

In this article, we will learn how to render a simple React app using an Express server.

Server Side Rendering (SSR) can be a useful tool to improve your application's load time and optimise it for better discoverability by search engines. React provides full support for SSR via the react-dom/server module, and we can easily set up a Node.js based server to render our React app. In the following sections, we will create a simple counter app with React and render it server-side using an Express backend.

Let us use create-react-app to generate a stock React app, which we'll call ssr-example. We'll modify the App.js file to implement a simple counter component that displays the current count and renders buttons with which the counter can be incremented and decremented. Our app's index.js file contains the line that tells React where to render our app; this needs to be slightly modified to work with SSR. To be able to render our app, we must first compile it so that an index.html and the compiled JavaScript are available; you can build the app with create-react-app's build command.

We will use express.js to set up a simple server for our app, installed with npm in the project folder. Since the server needs to be able to render JSX, we will also need to add some Babel dependencies, along with ignore-styles, since we do not want to compile CSS. To start, create a folder called server in your project folder, and create a server.js file within it that defines an express app listening on port 8000. With the app.use() statement, we also set up a handler for all requests to routes matching the ^/$ regular expression. In the next step, we will add code in the handler to render our app. But before we move on to that, we need to configure our Babel dependencies to work with the server we have just defined. To do so, create an index.js file in the server folder that imports the required dependencies and calls @babel/register.

Let us now add the code that actually renders our app. For this, we will use the fs module to access the file system and fetch the index.html file for our app. If there is an error reading the file, we return a 500 status code with an error message; otherwise, we can proceed with the rendering. The index.html has a placeholder element, usually a div with the ID root, where the React app renders. We will use the renderToString function from react-dom/server to render our App component as a string, and insert it into the placeholder div.

And that is pretty much it! We're now just one step away from being able to get this up and running. Let us add an ssr script to our package.json file to run the server; you can then start it from your terminal with the command yarn ssr. When you navigate to http://localhost:8000 in your browser, you will see the app rendered as before. The only difference is that the server responds with the rendered HTML this time around.

We have now learnt how to implement Server Side Rendering with a React app using a simple express server. The code used in this article is available at https://github.com/satansdeer/ssr-example , and a video version of this article can be found at https://www.youtube.com/watch?v=NwyQONeqRXA .
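The article's code isn't reproduced on this listing page, but a minimal sketch of the render handler described above might look like this (it assumes the Babel and ignore-styles setup from server/index.js so that JSX can be required):

```js
// server/server.js - a sketch of the SSR handler
import fs from 'fs';
import path from 'path';
import express from 'express';
import React from 'react';
import { renderToString } from 'react-dom/server';
import App from '../src/App';

const PORT = 8000;
const app = express();

app.use(/^\/$/, (req, res) => {
  fs.readFile(path.resolve('./build/index.html'), 'utf8', (err, data) => {
    if (err) {
      // something went wrong reading the compiled index.html
      return res.status(500).send('Error reading index.html');
    }
    // inject the rendered app into the placeholder div
    return res.send(
      data.replace(
        '<div id="root"></div>',
        `<div id="root">${renderToString(<App />)}</div>`
      )
    );
  });
});

// serve the compiled static assets (JS bundles, CSS, etc.)
app.use(express.static('./build'));

app.listen(PORT, () => console.log(`SSR server listening on port ${PORT}`));
```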


An elegant guide to Sequelize and Node.js

Sequelize is a promise-based SQL ORM for Node.js, with support for Postgres, MySQL, MariaDB, SQLite, and Microsoft SQL Server! In this tutorial, we will go through the creation of a simple library database using Sequelize and Node.js.

Models are the soul of Sequelize: we use them to represent the data of our tables, both at the row level as a model instance and at the table-structure level as a model. Let's create our first model to represent a book in our library. First, using NPM, we need to install the required libraries. Then let's create a db.js file to initialize our Sequelize database connection, a book.js file to store our model, and an index.js file to serve as our application entry point, with a main function that we will use to create our first object in the database.

Let's have some fun with some CRUD operations inside our main function. To list all the books in our database, we use the findAll query; to delete an entry in the database, we use the .destroy() method; to update a model, the .update() method; and to drop an entire table, the .drop() method.

The way we index our data in our database is essential for the performance of our system: a good index will allow you to retrieve data faster. For example, two queries can return the same data, but the one using the primary key index will find the element faster.

Models can relate to each other. Let's say we have an author model that relates to the book model, where a book has a single author and an author can have multiple books. We create the Author.js file, then update our Book.js file to include the authorId foreign key. This is a one-to-many relation; in Sequelize we use the .belongsTo() and .hasMany() methods to define it, wiring the models together in our index.js.

We've been through a lot within this short post, but with these few examples, you should have a good grasp of the core functionality of Sequelize. Sequelize is a really potent tool for working with SQL databases from within Node.js. If you want to keep going deeper into it, I recommend following it up by learning about Sequelize transactions and migrations. Have fun and keep coding! Check out the documentation of the modules we used in this post. If you have any questions - or want feedback on your post - come join our community Discord. See you there!
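The post's snippets aren't shown on this page, but a compact sketch of the pieces described above might look like this (the connection string and fields are assumptions for illustration):

```js
// a condensed sketch of db.js, book.js, and the CRUD calls in index.js
const { Sequelize, DataTypes } = require('sequelize');

// db.js: initialize the database connection
const sequelize = new Sequelize('postgres://user:pass@localhost:5432/library');

// book.js: define the Book model
const Book = sequelize.define('book', {
  title: { type: DataTypes.STRING, allowNull: false },
  author: { type: DataTypes.STRING },
});

// index.js: the application entry point
async function main() {
  await sequelize.sync(); // create the tables if they don't exist

  await Book.create({ title: 'Moby-Dick', author: 'Herman Melville' });

  const books = await Book.findAll(); // list all the books
  console.log(books.map((book) => book.title));

  await Book.update({ title: 'Moby Dick' }, { where: { author: 'Herman Melville' } });
  await Book.destroy({ where: { title: 'Moby Dick' } }); // delete an entry
}

main();
```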

Getting Started with Jamstack

"Jamstack" is an architecture for web apps that promises to be faster and easier to create than the traditional client/server architecture. In this article, we'll take a dive into the Jamstack architecture to better understand how it works and the advantages it offers. We'll also build and deploy a very simple Jamstack app using Eleventy and deploy it to Netlify . You should be familiar with: For the practical part of this article, you'll need to have Node & NPM installed on your computer. The examples included assume a Linux-based system. Rather than serving pages from a web server, Jamstack sites are static pages served directly from a CDN which allows for greater speed and security. Interactivity and personalization can be provided by JavaScript, which can be used for HTTP connections to API services like cloud databases, payment gateways, content management, etc. Rather than having a backend web app serving dynamic pages by generating HTML on the fly, Jamstack apps consist of pre-rendered static pages. Since most developers wouldn't want to write their app in HTML, a static-site generator is usually employed. Since Jamstack sites consist only of static assets, this means the site can be served by a distributed file-serving network aka a Content Delivery Network (CDN). Without any web server to provide functionality, how can a Jamstack site be as dynamic as a traditional web app? For example, how can you personalize your site without a database with customer data to populate authenticated pages? Most dynamic functionality in a Jamstack app will be provided by JavaScript running in the user's browser. With the emergence of the single-page application app architecture provided by JavaScript frameworks like React, Angular, and Vue, frontend apps can simulate multi-page sites without the need for a web server. Recently, there here have sprung up many companies providing API-based services including cloud databases (e.g. Hasura , Fauna ), cloud computation (e.g. AWS Lambda ), identity management (e.g. Auth0 ), eCommerce (e.g. Snipcart ), payment processing (e.g. Stripe ) and so on. These API services can all be connected to static apps using AJAX requests during the app lifecycle, thereby greatly enhancing the possible feature set for a Jamstack app. While the idea of Jamstack apps has existed in one form or another for quite a while, Jamstack has only recently become a viable and attractive architecture as the evolution of the ecosystem permitted it. For example, without modern JavaScript frameworks and the emergence of API services, Jamstack apps would be severely lacking in functionality when compared to traditional apps. Let's look at some of the advantages Jamstack provides that make it an attractive option. Since a Jamstack app does not have a backend server, it can be quite cheap to host. This is because backend apps usually require a web server to be listening all the time for requests, whereas CDN file serving is done with a centralized web server that you won't have to pay as much for. Jamstack apps will often be faster, since CDN content is served at the network edge. This means that your static files may be served from data centers around the world that are physically closer to your users, reducing latency of requests. While a web server can also be distributed it will still likely be slower than an optimized CDN. Even more importantly, static files are computationally much easier to serve than rendering HTML on the fly, and therefore can be served significantly faster. 
Since there is no backend server, Jamstack apps can be more secure than traditional apps, as there are far fewer exploitable public interfaces. You, as the developer, don't need to be concerned with server security, authentication management, database exploits, and other such sources of vulnerability. Also, CDNs can provide DDoS mitigation, since the content is distributed across a network, providing another way in which Jamstack can be more secure. It should be noted that Jamstack apps still have security vulnerabilities, just fewer than traditional apps; for example, your serverless functions and API calls to cloud services can still introduce security holes.

Of course, Jamstack is not a silver bullet, and it may not always be the superior solution. Getting rid of the web server definitely provides advantages, but there are downsides too. It might be argued that it's harder to develop Jamstack apps, especially for developers who are traditionally backend developers. Also, Jamstack can't take advantage of the enormous ecosystem of backend software that makes web development easier, including frameworks like WordPress or Laravel. It's also worth noting that most of the advantages of Jamstack accrue for apps with many users. If you're building an internal tool, like an admin dashboard, you won't get nearly as many advantages from Jamstack, and it may be wise to stick with a traditional web server solution.

Now we're clear on what Jamstack apps are and why you might create one. The question now is: how do you make them? The centerpiece of a Jamstack app is usually the static-site generator. This is a framework that turns your content into static files ready to be deployed. These range from simple (e.g. Eleventy) to more complex (e.g. Next.js): on the simple end, the generator might be little more than a CLI tool that turns markdown into HTML, while on the complex end, the generator may allow you to create a feature-rich single-page app to present your static content. Examples of static-site generators include Next.js, Gatsby, Nuxt.js, Eleventy, and Jekyll.

As discussed, static sites are usually hosted on a CDN. While you could use a CDN service directly, like Cloudflare or AWS S3, it's easier to use a static hosting provider like Netlify or Vercel. The advantage these services offer is that they're geared toward Jamstack sites (indeed, the term "Jamstack" was coined by Mathias Biilmann, CEO of Netlify) and include many useful features that help in deploying static sites. For example, both Netlify and Vercel provide a Git server that you can push to to trigger a build of your site, environment variable management, serverless functions, CI, and more.

Let's see how to build a simple Jamstack site: a blog. To do this, we'll use the Eleventy static-site generator and deploy the result to Netlify. To begin with, let's create a folder for our project and change into it. Let's now create a markdown file which will be used for the home page content, and add some content to it. One of the advantages of Eleventy is that it is very simple to use; in fact, with just one file we can create a build! We'll use npx (a CLI tool packaged with NPM that allows you to run a package without installing it) and call the Eleventy binary, which will create a build, generating a file _site/index.html. Of course, this is not yet a valid HTML page, since it's missing document tags.
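For reference, the steps so far as shell commands (the folder name is an assumption):

```bash
mkdir eleventy-blog && cd eleventy-blog

# home page content
echo '# My Blog' > index.md

# run Eleventy without installing it; the build lands in _site/
npx @11ty/eleventy
```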
So let's create a layout that Eleventy will use to render our markdown file. The Liquid templating language is one that Eleventy can read, and it provides a very simple way to define layouts. Let's then add some frontmatter to the top of the markdown file to allow for some configuration: a layout property, which tells Eleventy the layout template to use, as well as a title property. Run Eleventy again, and the updated _site/index.html will contain the full document structure. While very simple, this is a perfectly valid static site!

Let's use Netlify to deploy this site. First, register for a free Netlify account at https://netlify.com . Then, go to https://app.netlify.com , and drag the _site folder generated by Eleventy into the browser window. Netlify will automatically create a new app and deploy the static files. You can then view your site at https://[your-site].netlify.app!

In this article, you've learned about the Jamstack architecture and should now have an understanding of how it works and the advantages it offers. You've also built and deployed a very simple static blog using Eleventy and Netlify. If we connected an API service to this app, we'd have a bonafide Jamstack app. To continue your discovery of Jamstack, a good next step is to explore the static-site generators and API services mentioned above.
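As a reference for the layout step described above, a minimal sketch might look like this, saved as _includes/layout.liquid (Eleventy's default layout folder):

```html
<!doctype html>
<html>
  <head>
    <title>{{ title }}</title>
  </head>
  <body>
    {{ content }}
  </body>
</html>
```

The frontmatter at the top of index.md would then set layout: layout.liquid along with a title.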

Using Node.js in Angular and TransferState

In this post, we're going to learn how to use Node.js in your Angular app and pass data between the backend and the frontend. Angular Universal provides a tool called TransferState that helps you prevent duplicate data fetching; it's super handy and a bit advanced. Below is a tutorial that introduces the ideas behind TransferState and how to use it, but if you want a step-by-step walkthrough with complete code, then grab a copy of the course. The course is 270+ pages over 40 lessons that teach every step to creating blazing fast Angular apps.

Angular Universal is a powerful tool to address the SEO pitfall of Single Page Applications. Unfortunately, it's not a magic wand, and sometimes you need to adjust your application to make it efficient. Such a problem occurs when your application performs an HTTP call to the back-end during initialization: does it make sense to perform that call both in the back-end and in the front-end?

Let's assume we're building a task application and we have a text file tasks.json on our server. We could implement a back-end endpoint that reads the file content and sends it back to the requester, e.g. from our express server, and then load this file via an HTTP request. But with Universal, we can take this one step further. We don't want Angular to perform HTTP calls when running on the server when we don't have to, and with Universal we can use Node.js APIs like fs and just read the file directly in a server-side equivalent of the TasksService. It's okay if you do a double-take here: isn't this Angular code? Yes it is, but that's the sort of thing we're able to do with Universal.

Let's say we use this TasksService in a component. If we load the page and view the Network tab, we'd see that we're still making a call to /tasks.json, but we don't need to! It would be great to introduce a mechanism that passes such data along with the application bundle. That's what TransferState is for.

To use TransferState, we first create StateKeys to identify the data we want to transfer to the browser. Then we update the getTasks() method to set an entry in the TransferState registry when data is read from the filesystem. From then on, the TransferState registry contains the TASKS entry, and the registry is passed along with the rendered view to the browser! On the browser side, we configure the app to read from the StateKeys if they exist: we inject the TransferState service, then update getTasks() to retrieve data from the TransferState registry, falling back to an HTTP request when the registry is empty. Now if we navigate to our app, we can verify the behavior in the Network tab of the Developer Tools, and if we hit the refresh list button, we see that the new list is retrieved via HTTP rather than from TransferState.

Above is a quick introduction to the ideas, but of course, there are specific things you need to do to implement it for real. We walk through those details in the course. You can grab your copy here OR as part of a newline pro membership.
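The course's code isn't reproduced here, but a minimal sketch of the browser-side fallback pattern described above might look like this (the Task shape and URL are assumptions):

```ts
// tasks.service.ts - a sketch of the TransferState fallback pattern
import { Injectable } from '@angular/core';
import { HttpClient } from '@angular/common/http';
import { TransferState, makeStateKey } from '@angular/platform-browser';
import { Observable, of } from 'rxjs';
import { tap } from 'rxjs/operators';

interface Task {
  name: string;
}

const TASKS = makeStateKey<Task[]>('tasks');

@Injectable({ providedIn: 'root' })
export class TasksService {
  constructor(private http: HttpClient, private state: TransferState) {}

  getTasks(): Observable<Task[]> {
    // prefer data the server render placed in the TransferState registry
    const cached = this.state.get(TASKS, [] as Task[]);
    if (cached.length > 0) {
      return of(cached);
    }
    // fall back to an HTTP request when the registry is empty
    return this.http
      .get<Task[]>('/tasks.json')
      .pipe(tap((tasks) => this.state.set(TASKS, tasks)));
  }
}
```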


Deep dive into the Node HTTP module

An in-depth look at the Node HTTP module, and how to use it to scale up!

The HTTP module is a core module of Node.js, and it is fair to say it bears much of the responsibility for Node's initial rise in popularity. HTTP is the veins of the internet: this website, and any site you explore, is requested from a server using the HTTP protocol, and the server uses the same HTTP protocol to send back the data you requested.

Let's import the module and create a basic HTTP server with it. We have just created an HTTP Server object; for a complete list of the HTTP Server class's properties and methods, check out the official docs. We use the callback function passed to createServer to handle HTTP requests and respond to them. req, shorthand for "request", is an object of the class IncomingMessage that includes all the request information; it's good to remember that IncomingMessage extends the stream.Readable class, so each request object is, indeed, a stream. res, shorthand for "response", is an object of the class ServerResponse, which contains a collection of methods to send back an HTTP response. Each time we write response data with .write(), the chunk of data we pass gets immediately flushed to the kernel buffer; if we try to .write() after .end() has been called, we get a "write after end" error.

In an HTTP POST request, we usually get a body as part of the request. How do we access it using the Node HTTP module? Since the request object is an instance of IncomingMessage, which extends stream.Readable, in a POST request we can access the body as a stream of data. You can use an application like Postman to send the HTTP request to our application; while you are in Postman, be sure that you are firing a POST request and that the 'Content-Type' header is set to 'application/json'.

To handle HTTPS requests, Node.js has the core module https, but first we need some SSL certificates. For the purpose of this example, let's generate some self-signed certificates from the command line (use Git Bash if you are on Windows). Now let's use the HTTPS module to create an HTTPS server. The main difference is that we now have to read the key files and load them into an options object that we pass to .createServer() to create our new shiny HTTPS server.

Sometimes we would like to make an HTTP request to gather data from a third-party HTTP server. We can achieve this by using the Node HTTP module's .request() function; as an example, we could call the postman-echo API, which returns whatever we send it. But instead of using the core HTTP module for sending requests, I would suggest you use something more sophisticated and user-friendly, such as Axios. The pros of using something like Axios are the promise abstraction, easier error management on requests, and support for really valuable plugins, like axios-retry.

In a lot of situations, we can use a framework like Express to create a server instead of going directly to the HTTP module; install Express using npm in your command line. Express will give you a more elegant way to handle all your API routes, handle session data, and provide plugins for things like authentication. Express is going to make your life way easier!
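As a reference for the pieces described above, a minimal sketch of a server that answers GET requests and reads a POST body from the request stream might look like this:

```js
const http = require('http');

const server = http.createServer((req, res) => {
  if (req.method === 'POST') {
    // req is an IncomingMessage, i.e. a readable stream:
    // collect the body chunk by chunk
    let body = '';
    req.on('data', (chunk) => { body += chunk; });
    req.on('end', () => {
      res.writeHead(200, { 'Content-Type': 'application/json' });
      res.write(JSON.stringify({ received: JSON.parse(body) }));
      res.end(); // any .write() after this would throw "write after end"
    });
  } else {
    res.writeHead(200, { 'Content-Type': 'text/plain' });
    res.end('hello world');
  }
});

server.listen(3000, () => console.log('Listening on port 3000'));
```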
We've been through a lot within this short post, but with these few examples, you should have a good grasp of the core functionality of the HTTP Node core module. It's ideal to understand the inner workings of the HTTP module, but for more complex tasks it's recommended to use an abstraction library, such as Express in the case you are building servers, or Axios in the case you are creating HTTP requests. Have fun and keep coding! Check out the documentation of the modules we used in this post. If you have any questions - or want feedback on your post - come join our community Discord. See you there!


How to Convert JSON to CSV in Node.js

In this article, you'll learn how to create a Node CLI tool to convert a JSON file into a valid CSV file.

JSON has become the most popular way to pass data around on the modern web, with almost universal support between APIs and server applications. However, the dominant data format in spreadsheet applications like Excel and Google Sheets is CSV, as this format is the tersest and easiest for users to understand and create. A common function that backend apps need to perform, therefore, is the conversion of JSON to CSV. In this tutorial, we'll create a CLI script to convert an input JSON file to an output CSV file. If you don't need to customize the conversion process, it's quicker to use a third-party package like json-2-csv; at the end of the tutorial, I'll also show you how to use this instead of a custom converter.

To get the most out of this article you should be familiar with the basics of Node.js, JavaScript, and async/await. You'll need a version of Node.js that includes fs.promises and supports async/await, and you'll also need NPM installed; I recommend Node v12 LTS, which includes NPM 6.

Let's now get started making a Node script that we can use to convert a JSON file to CSV from the command line. First, we'll create the source code directory, json-to-csv. Change into that and run npm init -y so that we're ready to add some NPM packages. Let's now create an example JSON file that we can work with, called input.json; I've created a simple data schema with three properties: name, email, and date of birth.

It'd be very handy to allow our utility to take an input file name and an output file name so that we can use it from the CLI, e.g. node index.js input.json output.csv from within the json-to-csv directory. So let's now create an index.js file and install the yargs package to handle CLI input. Inside index.js, we'll require the yargs package and assign the argv property to a variable; this variable effectively holds any CLI inputs captured by yargs. Nameless CLI arguments will be in an array at the _ property of argv. Let's grab these and assign them to obviously-named variables, inputFileName and outputFileName, console logging the values for now to check they work as we expect.

For file operations, we're going to use the promises API of the fs package of Node.js, which makes handling files a lot easier than using the standard callbacks pattern. We'll use a destructuring assignment to grab the readFile and writeFile methods, which are all we'll need in this project.

Let's now write a function that will parse the JSON file. Since file reading is an asynchronous process, we'll make it an async function named parseJSONFile that takes the file name as an argument. In the method body, we add a try/catch block. In the try, we create a variable file and assign to it await readFile(fileName), which loads the raw file; next, we parse the contents as JSON and return the result. In the catch block, we console log the error so the user knows what's gone wrong, and exit the script by calling process.exit(1), which indicates to the shell that the process failed.

We'll now write a method to convert the JavaScript array returned from parseJSONFile to a CSV-compatible format. First, we're going to extract the values of each object in the array, discarding the keys; to do this, we'll map a new array where each element is itself an array of the object values.
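A sketch of the pieces covered so far (the toRows helper name is made up for illustration):

```js
const { argv } = require('yargs');
const { readFile, writeFile } = require('fs').promises;

const [inputFileName, outputFileName] = argv._;

async function parseJSONFile(fileName) {
  try {
    const file = await readFile(fileName);
    return JSON.parse(file);
  } catch (err) {
    console.error(err); // let the user know what went wrong
    process.exit(1); // signal to the shell that the process failed
  }
}

// map each object to an array of its values, discarding the keys
const toRows = (data) => data.map((obj) => Object.values(obj));
```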
Next, we'll use the array unshift method to insert a header row at the top of the data; we'll pass it the object keys of any one of the objects (since we assume they all have the same keys for the sake of simplicity). The last step is to convert the JavaScript array to a CSV-compatible string: it's as simple as using the join method, joining each row with a newline (\n). We're not quite finished, though. CSV fields should be surrounded by quotes to escape any commas within the strings, and there's an easy way to do this.

It's fairly trivial to write our CSV file now that we have a CSV string; we just need to call writeFile from an async method writeCSV. Just like in the parse method, we'll include a try/catch block and exit on error. To run our CSV converter, we'll add an IIFE to the bottom of the file; within this function, we'll call each of our methods in the sequence we wrote them, passing data from one to the next. At the end, let's console log a message so the user knows the conversion process worked. Let's now try and run the CLI command using our example JSON file. It works!

There's a fatal flaw in our script, however: if any CSV field contains a comma, it will be split into separate fields during the conversion. To fix this, we'll need to escape any commas before the data gets passed to the arrayToCSV method, then unescape them afterward. We're going to create two methods to do this: escapeCommas and unescapeCommas. In the former, we'll use map to create a new array where comma values are replaced by a variable token. This token can be anything you like, so long as it doesn't occur in the CSV data; for this reason, I recommend something random like ~~~~ or !!!!. In the unescapeCommas method, we'll replace the token with the commas, restoring the original content. With the run function modified to incorporate these new methods, the converter can now handle commas in the content.

Here's the real test of our CLI tool: can we import a converted sheet into Google Sheets? Yes, it works perfectly! Note I even put a comma in one of the fields to ensure the escape mechanism works.

While it's good to understand the underlying mechanism of CSV conversion in case we need to customize it, in most projects we'd probably just use a package like json-2-csv. Not only does this save us having to create the conversion functionality ourselves, but it also has a lot of additional features we haven't covered, like the ability to use different schema structures and delimiters. Let's update our project to use this package instead. First, install it on the command line; next, require it in the project and assign it to a variable using destructuring. We can now modify our run function to use this package instead of our custom arrayToCSV method. Note we no longer need to escape our content either, as the package does this for us. With this change, run the CLI command again and you should get almost the same results; the key difference is that this package only wraps fields in double quotes if they need escaping, which still produces a valid CSV.

So now you've learned how to create a CLI script for converting JSON to CSV using Node.js.
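The post's complete script isn't reproduced on this page, but assembling the steps above gives a sketch like this (the token value and helper names are illustrative):

```js
// index.js - a sketch assembled from the steps described above
const { argv } = require('yargs');
const { readFile, writeFile } = require('fs').promises;

const [inputFileName, outputFileName] = argv._;
const token = '~~~~'; // placeholder assumed not to occur in the data

async function parseJSONFile(fileName) {
  try {
    const file = await readFile(fileName);
    return JSON.parse(file);
  } catch (err) {
    console.error(err);
    process.exit(1);
  }
}

// replace commas inside values with the token before conversion
function escapeCommas(data) {
  return data.map((obj) => {
    const escaped = {};
    for (const [key, val] of Object.entries(obj)) {
      escaped[key] = String(val).replace(/,/g, token);
    }
    return escaped;
  });
}

function arrayToCSV(data) {
  const rows = data.map((obj) => Object.values(obj));
  rows.unshift(Object.keys(data[0])); // header row from the first object's keys
  // quote each field, then join fields with commas and rows with newlines
  return rows.map((row) => row.map((val) => `"${val}"`).join(',')).join('\n');
}

// restore the original commas in the finished CSV string
const unescapeCommas = (csv) => csv.split(token).join(',');

async function writeCSV(fileName, csv) {
  try {
    await writeFile(fileName, csv);
  } catch (err) {
    console.error(err);
    process.exit(1);
  }
}

(async () => {
  const data = await parseJSONFile(inputFileName);
  const csv = unescapeCommas(arrayToCSV(escapeCommas(data)));
  await writeCSV(outputFileName, csv);
  console.log(`Converted ${inputFileName} to ${outputFileName}`);
})();
```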


Formatting Dates in Node with Moment.js

Working with dates in a Node app can be tricky, as there are so many ways to format and display them. The APIs available in native JavaScript are too tedious to use directly, so your best option is to use a date/time library; the best-known and most flexible option is Moment.js.

For this article, I'll presume you understand the basics of JavaScript and Node.js. To install Moment you should have Node and NPM installed on your machine. Any current version will do, but if you're installing from scratch, I'd recommend the Node v12 LTS, which includes NPM 6.

Other than for simple use cases and one-offs, the JavaScript Date API will be too low-level and will require you to write many lines of code for what seems like a simple operation. Moment is an incredibly flexible JavaScript library that wraps the Date API, giving you very convenient helper methods for common date tasks, and it can be used either in the browser or on a Node.js server.

Let's begin by going to the terminal and installing Moment. With that done, we can require the Moment library in a Node.js project. The first thing we'll do to use Moment is create a new instance by calling the moment method. So what is a Moment instance? Think of it as a wrapper object around a single, specific date; the wrapper provides a host of API methods that allow you to manipulate or display the date. For example, we can use the add method which, as you'd expect, allows you to add a time period to a date. Note that Moment provides a fluent API, similar to jQuery. It's called "fluent" because the code will often read like a sentence, and fluent API methods return the same object, allowing you to chain additional methods for succinct, easy-to-read code.

We said above that Moment wraps a single, specific date. So how do we specify the date we want to work with? Just like with the native JavaScript Date object, if we don't pass anything to the moment method when we first call it, the date associated with the instance will be the current date, i.e. "now". What if we want to create a Moment instance based on some fixed date in the past or future? Then we can pass the date as a parameter to moment. There are several ways to do this, depending on your requirements.

You may be aware of some of the standards for formatting date strings (with unfortunate names like "ISO 8601" and "RFC 2822"). For example, my birthday formatted as an ISO 8601 string would look like this: "1982-10-25T08:00:15+10:00". Since these standards are designed for accurately communicating dates, you'll probably find that your database and other software provide dates in one of these formats. If your date is formatted as either ISO 8601 or RFC 2822, Moment is able to parse it automatically. If your date string is not formatted using one of these standards, you'll need to tell Moment the format you're using by supplying a second argument to the moment method: a string of format tokens.

Most date string formats are specified using format token templates. It's easiest to explain these with an example. Say we created a date in the format "1982-10-25"; the format token template representing this would be "YYYY-MM-DD". If we wanted the same date in the format "10/25/82", the template would be "MM/DD/YY".
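As a quick sketch of the parsing discussed above:

```js
const moment = require('moment');

// no argument: the instance wraps "now"
const m = moment();

// the fluent API reads like a sentence and supports chaining
moment().add(1, 'week').subtract(2, 'days');

// ISO 8601 strings are parsed automatically
moment('1982-10-25T08:00:15+10:00');

// non-standard strings need a format token template as the second argument
moment('10-10-2000', 'DD-MM-YYYY'); // British shorthand
moment('10/25/82', 'MM/DD/YY');
```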
Hopefully, that example makes it clear that each format token corresponds to a unique date property, e.g. "YYYY" corresponds to "1982", while "YY" is just "82". Format tokens are quite flexible and even allow us to create non-numeric values in dates like "25th October, 1982"; the format token string for this one would be "Do MMMM, YYYY" (note that including punctuation and other non-token values in the template is perfectly okay). For a complete list of date format tokens, see this section of the Moment docs.

So, if you want to use non-standard formatting for a date, supply the format token template as the second argument and Moment will use this to parse your date string. For example, shorthand dates in Britain are normally formatted as "DD-MM-YYYY", while in America they're normally formatted as "MM-DD-YYYY". This means a date like the 10th of October, 2000 will be ambiguous (10-10-2000), and so Moment cannot parse it accurately without knowing the format you're using. If we provide the format token template, however, Moment knows what we're looking for.

A Unix timestamp is a way to track time as a running total of seconds starting from the "epoch": January 1st, 1970 at UTC. For example, the 10th of October, 2000 is 971139600 seconds after the epoch, so that value is the Unix timestamp representing that date. You'll often see timestamps used by operating systems, as they're very easy to store and operate on. Note that if you pass a number directly to the moment method it is interpreted as milliseconds since the epoch; to parse a Unix timestamp in seconds, use the moment.unix method.

Now we've learned how to parse dates with Moment. How can we display a date in a format we like? The API method for displaying a date in Moment is format, and it can be called on a Moment instance. By default, Moment formats a date using the ISO 8601 format; if we want to use a different format, we simply provide a format token string to the method. For example, we could display the current date in the British shorthand "DD-MM-YYYY", or in a more reader-friendly format.

While it's the most popular date library, Moment is certainly not the only option, and you can always consider an alternative library if Moment doesn't quite fit your needs. One complaint developers have about Moment is that the library is quite large (20kB+); as an alternative, you might try Day.js, a 2kB minimalist replacement for Moment.js with an almost identical API. Another complaint is that the Moment object is mutable, which can lead to confusing code; one of the maintainers of Moment has released Luxon, a Moment-inspired library that provides an immutable and unambiguous API.

So now you know how to format dates using Moment and Node. You understand how to work with the current date, how to use standard and non-standard date strings, and how to display a date in whatever format you like by supplying a format token template. Parsing and displaying dates just scratches the surface of Moment's features: you can also use it to manipulate dates, calculate durations, convert between timezones, and more. If you'd like to learn more about these features, I'd recommend starting with the Moment docs.
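And a sketch of the display side:

```js
const moment = require('moment');

moment().format(); // ISO 8601 by default, e.g. "2020-06-10T14:00:00+10:00"
moment().format('DD-MM-YYYY'); // British shorthand
moment().format('Do MMMM, YYYY'); // reader-friendly, e.g. "25th October, 1982"

// Unix timestamps: seconds vs. milliseconds
moment.unix(971139600); // seconds since the epoch
moment(971139600000); // the same instant, expressed in milliseconds
```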

Node.js Tutorial: How JavaScript on the backend can make your life easier.

Node.js is JavaScript on the backend, built around Google's highly optimized V8 JavaScript engine. Welcome to the world of asynchronous, non-blocking programming. For this article, we assume that you're familiar with JavaScript basics, such as working with promises.

Node.js is available for most platforms. If you are using brew on macOS, you can install it with a single command; for Windows, Linux, macOS, and other operating systems, you can download the Node binaries from the official site, or use NodeSource to get platform-specific binaries.

In an empty directory, create a file named app.js with a single line of code that logs a greeting. To run it, in your shell of preference just cd to the directory and run the file with node, and as if by magic you will get an output in the console saying hello. The main point to notice here is that this is JavaScript: we output to the console using mostly the same interface that we would use in the browser. Also notice that immediately after that line of code executes, our application ends.

Next, let's use the Node.js standard File System module, fs, to write our hello world into a file, and take a moment to explore in depth the lifecycle of a Node application by tracking the execution flow of the application we just created. If we decided to remove the await from the fs.writeFile() call, the entire flow would change: with the await in place, the rest of the async function is suspended until the write finishes, and is queued in the event loop to resume afterwards. Without it, the following line would not wait for the fs.writeFile() operation to finish and would run immediately, and only the fs.writeFile() operation itself would be pending in the event loop.

With your install of Node.js, you will have the command line tool NPM, a package manager for Node; you can easily use it to download and install new modules to import into your application. For each Node application, you want to have a package.json. This file holds information about your project: author details, version, how to execute it, and custom NPM commands that you define. package.json also contains a list of all your dependencies; that way, if someone wants to use your application, you only have to distribute the source, and all the dependencies can be easily installed using NPM. To initialize the package.json file, run npm init, follow the steps in the command line, set the author details and the main entry point as index.js, and leave the rest as defaults.

To install a package using npm we run npm install <name of the package>; we also add the --save flag to save newly added dependencies to our package.json file. Now let's follow that pattern and install our first npm module, Express. Create a new file, index.js, with a minimal Express app in it, and run it. If you go to http://localhost:3000/ you will get the response we coded.

A fun thing to consider: this time, after we ran our application, it didn't automatically close! That's because app.listen() attaches a listener to the poll phase of the event loop that listens for HTTP calls on the declared port. Unless we manually detach it, it is always going to be up, and on each iteration of the event loop it stays alive, listening for new HTTP calls and invoking our callback function on each one.
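A minimal sketch of that first Express app:

```js
// index.js
const express = require('express');

const app = express();

app.get('/', (req, res) => {
  res.send('hello world');
});

// attaches a listener in the event loop's poll phase,
// keeping the process alive between requests
app.listen(3000, () => console.log('Listening on http://localhost:3000'));
```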
Let's add a new route to our API that on each call uses fs.readFile() to read the helloworld.txt file from the file system and send its contents back as the response to the request. Let's also change the original / endpoint to return JSON instead of plain text.

Our API is looking pretty nice, but what would happen if you removed helloworld.txt and then attempted to call the /readFile API endpoint? Oh no, we get an error, and since the error is thrown before we ever send a response back, the browser waits until the request times out. The error in question is caused by attempting to read a nonexistent file with fs.readFile. In JavaScript, when an error is thrown, the execution of the current code block gets aborted, so to not lose control of the flow we need to catch the error. Error handling can be done elegantly in JavaScript using try-catch blocks, so let's apply that to our API endpoint. You should always catch your errors!

We've been through a lot within this short post, but by now you should have a well-rounded understanding of how a Node.js application works. Have fun implementing your own APIs! Have fun and keep coding! Check out the documentation of the modules we used in this post. If you have any questions - or want feedback on your post - come join our community Discord. See you there!
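For reference, continuing the app from the sketch above, the /readFile endpoint with its try-catch might look like this:

```js
const fs = require('fs').promises;

app.get('/readFile', async (req, res) => {
  try {
    const contents = await fs.readFile('helloworld.txt', 'utf8');
    res.json({ contents });
  } catch (err) {
    // without the catch, a missing file would leave the request hanging
    res.status(500).json({ error: 'Could not read helloworld.txt' });
  }
});
```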


Axios JS & Node.js: a match made in heaven for API consumption

Axios is a popular JavaScript HTTP client; in this post we will go through some of the most common use cases with Node.

While building a Node.js application or microservice, you might end up needing to consume a third-party API, or just make HTTP calls in general, and if that's the case, Axios has you covered. Axios provides an elegant and modern way to handle HTTP requests from Node using promises, avoiding old practices like relying on callbacks or deprecated modules such as Request. If you have used Node.js before, you are good to go; this is an entry-level article.

Axios is available through the major Node package registries and can be installed using npm or yarn, then imported into our node app. Let's now use the library to make our first HTTP GET request, taking advantage of Axios's native promises by abstracting it into an async function with await. jsonPlaceholder is a fun little toy API server, which is great for this exercise; in the resulting response object, the emphasis belongs on the data property, which contains the response data sent by the server. For the other HTTP verbs, there are the functions axios.post(url, data, options), axios.put(url, data, options), and axios.delete(url).

Often you will need to set custom request headers, or set a custom request timeout; we can achieve this through the Axios request config options. To properly check for Axios errors and get the most information out of them, we need to check the possibilities in the proper order, filtering through the cases in a catch block.

There are also useful plugins. The first one to showcase is axios-retry, which, as its name states, is a retry plugin for Axios: it lets us retry failed requests and set retry conditions and timeouts for those requests, and it can be installed with npm. Last but not least, there's axios-mock-adapter, also installable with npm, which allows us to easily create a mock server and attach it to our Axios instance; this is ideal for a controlled testing environment where you want to black-box your application.

We've been through a lot within this short post, but with these tools you should be able to perform the basic HTTP operations with Axios. Have fun doing HTTP requests to an API, crawling a website (responsibly!), or just pinging your services to know if they are alive. Have fun and keep coding! Check out the documentation of the modules we used in this post. If you have any questions - or want feedback on your post - come join our community Discord. See you there!
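A compact sketch of the request and error-handling patterns described above:

```js
const axios = require('axios');

async function getTodo() {
  try {
    const response = await axios.get('https://jsonplaceholder.typicode.com/todos/1', {
      timeout: 5000, // custom request timeout
      headers: { 'X-Custom-Header': 'demo' }, // custom request headers
    });
    console.log(response.data); // the response data sent by the server
  } catch (error) {
    // check the error cases in the proper order
    if (error.response) {
      // the server responded with a status outside the 2xx range
      console.error(error.response.status, error.response.data);
    } else if (error.request) {
      // the request was sent but no response was received
      console.error('No response received');
    } else {
      // something failed while setting up the request
      console.error('Request setup error:', error.message);
    }
  }
}

getTodo();
```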

A journey to Asynchronous Programming: NodeJS FS.Promises API

This post is about how Node.js performs tasks, how and why one would use async methods, and why and when one should use sync methods. We will also see how to make use of the new FS.Promises API.

Throughout this post, we will look at the many ways to write asynchronous code in JavaScript, including:
✅ How asynchronous code fits in the event loop
✅ When you should resort to synchronous methods
✅ How to promisify FS methods and the FS.Promises API

To make the best use of this post, you should already:
✅ Be sufficiently experienced in JavaScript and NodeJS fundamentals (variables, functions, etc.)
✅ Have had some level of exposure to the FileSystem module (optional)

Once you're done reading this post, you will feel confident about asynchronous programming and will have learned something new. ⭐ If you hang tight, there's also some bonus content regarding best practices when using certain FileSystem methods!

JavaScript achieves concurrency through what is known as an event loop. The event loop is responsible for executing the code you write, processing any event that fires, and so on. It is what makes it possible for JavaScript to run on a single thread and still handle asynchronous tasks; in other words, JavaScript does one thing at a time. This might sound like a limitation, but it actually helps: it allows you to work without worrying about concurrency issues, and the event loop itself is non-blocking, unless, of course, you as a developer purposely do something to block it. This loop runs for as long as your program runs, hence the name "event loop".

To better understand asynchronous programming, though, you must understand a few concepts. Let's take a look at a code example and see what a typical execution flow looks like (a sketch appears below). Initially, the synchronous console.log() calls run in the order they were pushed onto the call stack. Then the Promise thenables are pushed into the Job Queue, while setTimeout's callback function is pushed into the Callback Queue. Because the Job Queue is given a higher priority than the Callback Queue, the thenables are executed before the callback functions. What's a promise, or a thenable, you ask? That's what we will look at in the next topic!

As you saw with setTimeout, a callback function is one of the ways JavaScript lets you write asynchronous code. In JavaScript, even functions are objects; because of this, a function can take another function as an argument and can also be returned by a function. A function that takes another function as an argument is called a higher-order function, and the function passed as an argument is known as a callback. Quite often, though, a whole lot of nested callbacks gets out of hand: as the famous example from callbackhell shows, callbacks can become extremely complex and difficult to maintain in a large codebase. Don't panic! That's why we have promises.

A promise is an object that will produce some value in the future. When? We can't say; it depends. However, the value that is produced is one of two things: either a resolved value, or a reason why it couldn't be resolved, which usually indicates that something went wrong. A promise goes through a lifecycle (pending, then either fulfilled or rejected) that is nicely visualized in a great resource on promises, MDN.
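Here is a minimal sketch of the execution flow just described; the log labels are illustrative, not from the original post.

```js
console.log('A: synchronous, runs first off the call stack');

// The callback goes to the Callback Queue (a macrotask).
setTimeout(() => console.log('D: setTimeout callback, Callback Queue'), 0);

// Thenables go to the Job Queue (microtasks), which drains
// before the Callback Queue gets a turn.
Promise.resolve()
  .then(() => console.log('B: first thenable, Job Queue'))
  .then(() => console.log('C: second thenable, Job Queue'));

console.log('A2: still synchronous');

// Printed order: A, A2, B, C, D.
```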
But this still didn't provide the cleanliness we wanted, because it was quite easy to end up with a whole lot of thenables one after the other. This is why the async/await syntax was introduced, and it looks a whole lot better than what you saw in all the previous code examples!

Before we jump into the exciting FS.promises API, we must talk about the often unnoticed and unnecessarily avoided synchronous FileSystem methods. Remember how I mentioned earlier that you can purposely block the event loop? A synchronous FS method does just that. You might have heard quite a lot of times that you should avoid synchronous FS methods like the plague because they block the event loop, but trust me, there are times when you can use them. A synchronous function should be used over an asynchronous one when:

A typical use case that satisfies both of the above conditions is a DataStore, a means of storing products, whose constructor makes obvious use of synchronous methods. This is completely acceptable because the constructor function runs only once per creation of a new DataStore instance, and it is essential to check that the file is available, and to create it if it isn't, before any other function uses it.

The asynchronous FileSystem methods in NodeJS commonly use callbacks because, at the time they were written, Promises and async/await hadn't come out, nor were they even at an experimental stage. The key advantage these methods have over their synchronous siblings is that they don't block the event loop, which lets us write better, more performant code. When code runs asynchronously, the CPU does not wait idly until a task is completed but moves on to the next set of tasks. For example, take a task that takes 200ms to complete. If a synchronous method is used, the CPU is occupied for the entire 200ms; if an asynchronous method is used, around 190ms of that time is freed up and can be used by the CPU to perform any other tasks that are available.

A typical asynchronous FileSystem call is distinguished by the lack of Sync in the method name and by the use of a callback function. When secret.txt has been completely read, the callback function is executed and the stored secret data is printed to the console.

As humans, we're prone to silly mistakes, and when frustrated or under a lot of stress, we tend to make unwise decisions. One such decision is mixing synchronous code with asynchronous code! Consider a situation where a file is read asynchronously while being deleted synchronously: due to the nature of how NodeJS schedules operations, it is very likely that the secret.txt file is deleted before we actually read it. Thankfully, because we catch the error, we will at least know that the file doesn't exist anymore. It is best not to mix asynchronous code with synchronous code; consistency is mandatory in a modern codebase.

Back before FS.promises was introduced, developers had to resort to a few troublesome techniques. You might not need them anymore, but in the unlikely event that you end up using an old version of NodeJS, knowing how to achieve promisification will help greatly (a callback-style refresher follows below).
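As a refresher, here is the callback shape that these promisification techniques are meant to clean up; a minimal sketch, with secret.txt standing in for any file.

```js
const fs = require('fs');

// Callback style: the error (if any) is the first argument, the data second.
fs.readFile('secret.txt', 'utf-8', (err, data) => {
  if (err) {
    console.error('Could not read file:', err.message);
    return;
  }
  // Nesting further async work in here is how "callback hell" begins.
  console.log('The secret is:', data);
});
```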
One method is to use the promisify function from the NodeJS util module. As you can see, though, this lets you turn only one method into its promisified version at a time, so some developers used an external module known as bluebird that can promisify a whole module at once. Some developers still use bluebird rather than the natively implemented Promises API, for performance reasons.

As of NodeJS version 10.0, you can use FS.promises as a solution to all the problems you'd face with thenables when using Promises. You can neatly and directly use the FS.promises API with the clean async/await syntax, and you don't need any external dependencies. It's much cleaner than the code you saw in the callback hell example, and the promises example as well! Note, however, that async/await is simply syntactic sugar, meaning it uses the Promise API under the hood (see the sketch at the end of this post).

File streams are unfortunately among the most underused and least known concepts in the FileSystem module. To understand how a FileStream works, take a look at the Streams API in the NodeJS docs. One very common use case for FileStreams is copying a large file: whether you use an asynchronous or a synchronous method, a naive copy leads to a large amount of memory usage and a long wait. This can be avoided by using the FileSystem methods fs.createReadStream and fs.createWriteStream (also sketched below).

Phew! That was long, wasn't it? But now you should feel pretty confident about asynchronous programming, and you can use the FS.promises API instead of the callback methods so often used with the FileSystem module. Over time we will see more changes in NodeJS; it is, after all, written in a language that is widely popular. What you should do now is check out the resources section and read some more, or try out Fullstack Node.Js to further improve your confidence and get a lot of other tools under your belt!
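Here is the promisification sketch referenced above, contrasting util.promisify with the FS.promises API; the file name is illustrative.

```js
const fs = require('fs');
const { promisify } = require('util');

// Old approach: promisify one callback-based method at a time.
const readFileAsync = promisify(fs.readFile);

// Modern approach (Node 10+): fs.promises exposes promise-based
// versions of the whole module, ready for async/await.
async function main() {
  const viaPromisify = await readFileAsync('secret.txt', 'utf-8');
  const viaFsPromises = await fs.promises.readFile('secret.txt', 'utf-8');
  console.log(viaPromisify === viaFsPromises); // true: same contents
}

main().catch(console.error);
```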
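And a sketch of the stream-based copy from the FileStreams section. Data moves through in small chunks instead of being buffered whole in memory, which is what keeps usage flat for large files; the file names are illustrative.

```js
const fs = require('fs');

// Copy a large file chunk by chunk; only one chunk is held in
// memory at a time, never the whole file.
const source = fs.createReadStream('large-video.mp4');
const destination = fs.createWriteStream('large-video-copy.mp4');

source.pipe(destination);

destination.on('finish', () => console.log('Copy complete!'));
source.on('error', (err) => console.error('Read failed:', err));
destination.on('error', (err) => console.error('Write failed:', err));
```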
