Tutorials on Express.js

Learn about Express.js from fellow newline community members!

  • React
  • Angular
  • Vue
  • Svelte
  • NextJS
  • Redux
  • Apollo
  • Storybook
  • D3
  • Testing Library
  • JavaScript
  • TypeScript
  • Node.js
  • Deno
  • Rust
  • Python
  • GraphQL

Integrating JWT Authentication with Go and chi jwtauth Middleware

Accessing an e-mail account anywhere in the world on any device requires authenticating yourself to prove that the data associated with the account (e.g., the e-mail address and inbox messages) actually belongs to you. Often, you must fill out a login form with credentials, such as an e-mail address and password, that uniquely identify your account. When you first create an account, you provide this information in a sign-up form, and in some cases, the service sends a confirmation e-mail or an SMS text message to verify that you own the supplied e-mail address or phone number. Because it is highly likely that only you know the credentials to your account, authentication prevents unwanted actors from accessing your account and its data.

Each time you log into your e-mail account and read your most recent unread messages, you, like many other end users, don't think about how the service implements authentication to secure your data and hide your activity history. You're busy, and you only want to spend a few minutes in your inbox before closing it out and resuming your day.

For developers, the difficulty in implementing authentication comes from striking a balance between the user experience and the strength of the authentication. For example, a sign-up form may prompt the user to enter a password that not only contains alphanumeric characters but also meets other requirements, such as a minimum length and the inclusion of punctuation marks. Asking for a stronger password decreases the likelihood of a malicious user correctly guessing it, but at the same time, a stronger password is more difficult for the user to remember. Keep in mind that poorly designed authentication can easily be bypassed and introduce more vulnerabilities into your application.

In most cases, applications implement either session-based or token-based authentication to reliably verify a user's identity and persist authentication for subsequent page visits. Since Go is a popular choice for building server-side applications, Go's ecosystem offers many third-party packages for implementing these solutions in your applications. Below, I'm going to show you how to integrate JWT authentication into a Go and chi application with the chi jwtauth middleware.

Let's imagine the following scenario: within your e-mail inbox, you are asked to re-enter your e-mail address and password on every single action you take (e.g., opening an unread e-mail or switching to a different inbox tab) to continuously verify your identity. This implementation could be useful in the context of accidentally leaving your e-mail inbox open on a publicly shared library computer when you have to step out to take a phone call. However, if the login credentials are sent over a non-HTTPS connection, then they are susceptible to a MITM (man-in-the-middle) attack and can be hijacked. Plus, it would result in a frustrating user experience and immediately drive users away to a different service.

Traditionally, to persist authentication, an application establishes a session and saves an http-only cookie containing this session's ID inside the user's browser. Usually, this session ID maps to the user's ID, which can then be used to fetch the user's information.
If you have ever built an Express.js application with the authentication middleware library Passport and the session middleware library express-session , then you are probably familiar with the connect.sid http-only cookie, which is a session ID cookie, and with managing sessions in Redis . In Redis, the connect.sid cookie's corresponding key is the session ID (the substring following s%3A and preceding the first dot of the cookie's value) prefixed with sess: , and its value contains information about the cookie and the user authenticated by Passport.

When a user sends an authentication request (via the standard username/password combination or an OAuth 2.0 provider such as Google, Facebook or Twitter), Passport determines which of these authentication mechanisms ("strategies") to use to process the request. For example, if the user chooses to authenticate via Google, then Passport uses GoogleStrategy , like so: The done function supplies Passport with the authenticated user.

To avoid exposing credentials in subsequent requests, the browser uses a unique cookie that identifies the user's session. Passport serializes the least amount of information required to map the user to the session; often, only the user's ID gets serialized. By serializing as little information as needed, less data is stored in the user's session. Upon receiving a subsequent request, Passport deserializes the user's ID (serialized via serializeUser ) into an object with the user's information, which keeps it up to date with any recent changes. Whenever an Express.js route needs to access this information, it can do so via the req.user object.

With session-based authentication, authentication is stateful because the server persists/tracks the session (either within the server's internal memory or in an in-memory data store like Redis or Memcached). With token-based authentication, authentication is stateless : nothing needs to be persisted on the server side, and the server doesn't need to fetch the user's information on every subsequent request.

One of the most popular token standards is JSON Web Token (JWT). JWTs are used for authorization, information exchange and verifying the user's authentication. Instead of creating a session, the server creates a cryptographically signed JWT and saves an http-only cookie with this token inside the user's browser, which allows the JWT to automatically be sent on every subsequent request. If the JWT is instead kept in plain memory, then it should be sent in the Authorization header using the Bearer authentication scheme ( Bearer <token> ).

A JWT consists of three Base64URL-encoded strings: a header, a payload and a signature. These strings are concatenated together (separated by dots) to form a token. Example: The following is a simple JWT, which follows the format <BASE64_URL_HEADER>.<BASE64_URL_PAYLOAD>.<BASE64_URL_SIGNATURE> :

eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiIxMjM0NTY3ODkwIiwibmFtZSI6IkpvaG4gRG9lIiwiaWF0IjoxNTE2MjM5MDIyfQ.SflKxwRJSMeKKF2QT4fwpMeJf36POk6yJV_adQssw5c

Constructing JWTs is relatively straightforward, and so is decoding one. Try out different signing algorithms, adding scope: [ "admin", "user" ] to the payload or modifying the secret in the JWT debugger.

Note : Since a JWT is digitally signed, its content is protected from tampering; tampering invalidates the token. However, if sensitive data must be placed in the payload or header, the JWT needs to be encrypted. It is recommended to first sign the JWT and then encrypt it.
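To see what's inside this example token, here is a minimal TypeScript sketch (assuming Node.js v16+ for its built-in base64url support) that decodes, but does not verify, the header and payload:

```ts
// Decode (but do not verify!) the example token above.
const token =
  "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9." +
  "eyJzdWIiOiIxMjM0NTY3ODkwIiwibmFtZSI6IkpvaG4gRG9lIiwiaWF0IjoxNTE2MjM5MDIyfQ." +
  "SflKxwRJSMeKKF2QT4fwpMeJf36POk6yJV_adQssw5c";

// The first two dot-separated segments are Base64URL-encoded JSON.
const [header, payload] = token
  .split(".")
  .slice(0, 2)
  .map((part) => JSON.parse(Buffer.from(part, "base64url").toString("utf8")));

console.log(header);  // { alg: 'HS256', typ: 'JWT' }
console.log(payload); // { sub: '1234567890', name: 'John Doe', iat: 1516239022 }
```

The third segment, the signature, is raw bytes rather than JSON; only a party holding the secret can recompute and verify it.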
Referring back to the previous Express.js and Passport example, we can remove Redis, the session middleware and the serialization/deserialization logic (which relies on sessions), and then add the Passport JWT strategy passport-jwt for authenticating with a JWT. With the introduction of token-based authentication via JWT, we no longer have to devote any backend infrastructure/resources to managing sessions, and we significantly reduce the number of times we need to query the database for the user's information.

Like any other authentication method, token-based authentication comes with its own set of unique problems. For example, when we store the token in a cookie, this cookie is sent on every request (bound to a single domain), even those that don't require the user to be authenticated. Although this cookie is stored with the HttpOnly attribute (inaccessible to JavaScript), it is still susceptible to a Cross-Site Request Forgery (CSRF) attack, which happens when a third-party website successfully sends a request to a service without the user's explicit consent because cookies (those set by the service's server) are sent on all requests to that service's server. If you're running an online banking service, and one of your authenticated users visits a malicious website that sends the request POST https://examplebankingservice.com/transfer when they click on a harmless-looking button, then money will be transferred from the user's bank account because their valid token is sent with the request.

To mitigate this vulnerability, set the token cookie's SameSite attribute ( sameSite: "lax" or sameSite: "strict" depending on your needs) and include a CSRF token specific to each user of your service in case of malicious subdomains. The CSRF token should be set as a hidden form field in forms that send requests to protected endpoints upon being submitted, and your service should regenerate a new CSRF token for the user upon login. This way, malicious websites cannot send requests to protected endpoints unless they also know that specific user's CSRF token.

Note : By default, the latest versions of some modern browsers already treat cookies without the SameSite attribute as if this attribute were set to Lax . Setting the SameSite attribute of a cookie to Strict restricts the cookie to its originating website only and prevents it from being sent on any cross-site request or iframe. Setting the SameSite attribute to Lax behaves like Strict , except that the cookie is still sent on top-level navigations that use a safe HTTP method such as GET; cross-site POST requests and subresource requests do not include the cookie.

The alternative is to store the token in localStorage , but this is not recommended because localStorage is accessible by any JavaScript code running on your website. Therefore, it is susceptible to a Cross-Site Scripting (XSS) attack, which allows unwanted JavaScript code to be injected into and executed within your website. Common attack vectors for XSS are passing unsanitized user input directly to eval and appending unsanitized HTML (containing a <script /> tag with malicious code).

Unlike sessions, an individual JWT cannot be forcefully invalidated when security concerns arise. Rather, there are approaches that can be taken to invalidate a JWT, such as keeping token lifetimes short or maintaining a server-side denylist of revoked tokens. Fortunately, supporting JWT authentication in a Go and chi application is made easy with the third-party jwtauth library.
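As a rough illustration of these cookie attributes in an Express.js handler (a sketch only: the cookie name, route and jsonwebtoken usage are assumptions, not code from this tutorial):

```ts
import express from "express";
import jwt from "jsonwebtoken";

const app = express();
app.use(express.urlencoded({ extended: false }));

// Assumption: the signing secret comes from the environment.
const JWT_SECRET = process.env.JWT_SECRET ?? "dev-only-secret";

app.post("/login", (req, res) => {
  // Assume the credentials were already validated; sign a short-lived token.
  const token = jwt.sign({ username: req.body.username }, JWT_SECRET, {
    expiresIn: "1h",
  });

  res.cookie("jwt", token, {
    httpOnly: true,  // hidden from client-side JavaScript (limits XSS theft)
    secure: true,    // sent only over HTTPS (mitigates MITM)
    sameSite: "lax", // withheld on cross-site POSTs (mitigates CSRF)
  });

  res.redirect("/profile");
});
```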
Similar to the Express.js and Passport example, jwtauth validates and extracts payload information from a JWT for route handlers via several pre-defined middleware handlers ( jwtauth.Verifier and jwtauth.Authenticator ) and context respectively. To demonstrate this, let's walk through a simple login flow. Inside of a Go file, scaffold out the routes using the chi router library. This application involves only four routes: ( main.go )

Let's think about these routes in depth. When the user logs in, the navigation bar should no longer display a "Log In" link. Instead, the navigation bar should display the user's username as a link, which opens the user's "Profile" page when clicked, and a "Log Out" link. This means that all pages that display the navigation bar should be aware of whether or not the user is logged in, as well as the identity of the user.

Let's group the GET / , GET /login and GET /profile endpoints together via the r.Group method, and then execute the middleware handler jwtauth.Verifier to seek, verify and validate the user's JWT. This handler accepts a pointer to a JWTAuth struct, which is returned by the jwtauth.New method. Essentially, this method creates a factory for generating JWTs using a specified algorithm and secret (an additional key must be provided for RSA and ECDSA algorithms). The POST /login and POST /logout endpoints can be grouped together to establish them as routes that don't require a JWT.

Behind the scenes, jwtauth.Verifier automatically searches for a JWT in an incoming request in the following order: the jwt URI query parameter, the Authorization: Bearer <token> header and the jwt cookie. Once the JWT is verified, it is decoded and then set on the request context, which gives subsequent handlers direct access to the payload claims and the token itself.

When the user submits a login form, their credentials are sent to the endpoint POST /login . Its corresponding route handler checks if the credentials are valid, and when they are, the handler generates a token that encodes parts of the user's information (i.e., their username) as payload claims via a MakeToken function and stores the token cookie within the user's browser, all before redirecting the user to their "Profile" page. Note : Underscores indicate placeholders for unused variables. For simplicity's sake, we're going to accept any username and password combination as long as each is at least one character long.

When the user logs out, this token cookie needs to be deleted. To delete this cookie, set its MaxAge to a value less than zero. After this cookie is deleted, redirect the user back to the homepage.

Although the GET / , GET /login and GET /profile endpoints rely on the jwtauth.Verifier middleware, they each need to be grouped individually (not together) to add custom middleware that accounts for their different scenarios. When rendering the webpages via data-driven templates , we need to extract the user's username from the JWT's payload, which we encoded via the MakeToken function, to display it within the navigation bar. The payload's claims can be accessed from the request's context. Once the templates are parsed and prepared via the template.ParseFiles and template.Must methods respectively, apply these templates ( tmpl ) to the page data ( data ) via the ExecuteTemplate method. The second argument of the ExecuteTemplate method names the root template that contains the other parsed templates (partials). The output is written to the ResponseWriter , which will render the result as a webpage.
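For readers mapping this flow back to the earlier Express.js comparisons, a rough TypeScript analogue of the Verifier/Authenticator pair might look like the following sketch (the header and cookie locations mirror jwtauth's defaults; jsonwebtoken and cookie-parser stand in for the Go libraries, and the secret name is an assumption):

```ts
import express, { Request, Response, NextFunction } from "express";
import cookieParser from "cookie-parser";
import jwt from "jsonwebtoken";

const JWT_SECRET = process.env.JWT_SECRET ?? "dev-only-secret"; // assumption

const app = express();
app.use(cookieParser()); // needed so req.cookies is populated

// Like jwtauth.Verifier: locate a token (header, then cookie), decode it and
// stash the claims on the request without rejecting the request yet.
function verifier(req: Request, _res: Response, next: NextFunction) {
  const bearer = req.headers.authorization?.replace(/^Bearer /, "");
  const token = bearer ?? req.cookies.jwt;
  try {
    (req as any).claims = token ? jwt.verify(token, JWT_SECRET) : null;
  } catch {
    (req as any).claims = null; // invalid or expired token: treat as anonymous
  }
  next();
}

// Like jwtauth.Authenticator: reject requests that carry no valid claims.
function authenticator(req: Request, res: Response, next: NextFunction) {
  if (!(req as any).claims) {
    res.status(401).send("Unauthorized");
    return;
  }
  next();
}

app.get("/profile", verifier, authenticator, (req, res) => {
  res.send(`Hello, ${(req as any).claims.username}`);
});
```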
Note : If building a service, such as a RESTful API, that requires a 401 response to be returned on protected routes that can only be accessed by an authenticated user, use the jwtauth.Authenticator middleware.

Finally, spin up the server by running the go run . command. Within a browser, visit the application at http://localhost:8080 and open the browser's developer console. Observe how the browser sets and unsets the cookie when you log in and out of the application, and watch as the user's username gets extracted from the JWT and displayed in the navigation bar. If you find yourself stuck at any point while working through this tutorial, then feel free to visit the main branch of this GitHub repository for the code.

Explore and try out other token standards for authentication. If you want to learn more advanced back-end web development techniques with Go, then check out the Reliable Webservers with Go course by Nat Welch, a site reliability engineer at Time by Ping (and formerly a site reliability engineer at Google), and Steve McCarthy, a senior software engineer at Etsy.


Visualizing Geographic SQL Data on Google Maps

Analytics dashboards display different data visualizations to represent and convey data in ways that allow users to quickly digest and analyze information. Many multivariate datasets consumed by dashboards include one or more spatial fields, such as an observation's set of coordinates (latitude and longitude). Plotting this data on a map visualization contextualizes the data within a real-world setting and sheds light on spatial patterns that would otherwise be hidden in the data. In particular, seeing the distribution of your data across an area connects it to geographical features and area-specific data (e.g., neighborhood/community demographics) available from open data portals.

One of the earliest examples of this is the 1854 cholera visualization by John Snow, who marked cholera cases on a map of London's Soho and uncovered the source of the cholera outbreak by noticing a cluster of cases around a water pump. This discovery helped to correctly identify cholera as a waterborne disease, not an airborne one. Ultimately, it changed how we think about disease transmission and the impact our surroundings and environment have on our health. If your data contains spatial fields, then you too can apply the simple technique of plotting markers on a map to extract valuable insight from your own data.

Map visualizations are eye-catching and take on many forms: heatmaps, choropleth maps, flow maps, spider maps, etc. Besides being colorful and aesthetically pleasing, these visualizations provide intuitive controls for users to navigate through their data with little effort. To create a map visualization, many popular libraries (e.g., the Google Maps API and deck.gl) support drawing shapes, adding markers and overlaying geospatial visualization layers on top of a set of base map tiles. Each layer generates a pre-defined visualization based on a collection of data: it associates each data point with certain attributes (color, size, etc.) and renders them on a map. By pairing a map visualization library with React.js, developers can build dynamic map visualizations and embed them into an analytics dashboard. If the visualizations' data comes from a PostgreSQL database, then we can make use of PostGIS geospatial functions to help answer interesting questions related to spatial relationships, such as which data points lie within a 1 km radius of a specific set of coordinates.

Below, I'm going to show you how to visualize geographic data queried from a PostgreSQL database on Google Maps. This tutorial will involve React.js and the @react-google-maps/api library, which contains React.js bindings and hooks for the Google Maps API, to create a map visualization that shows the location of data points. To get started, clone the following two repositories: The first repository contains a Create React App with TypeScript client-side application that displays a query builder for composing and sending queries and a table for presenting the fetched data. The second repository contains a multi-container Docker application that consists of an Express.js API, a PostgreSQL database and pgAdmin. The Express.js API connects to the PostgreSQL database, which contains a single table named cp_squirrels seeded with 2018 Central Park Squirrel Census data from the NYC Open Data portal. Each record in this dataset represents a sighting of an eastern gray squirrel in New York City's Central Park in the year 2018.
When a request is sent to the API endpoint POST /api/records , the API processes the query attached as the body of the request and constructs a SQL statement from it. The pg client executes the SQL statement against the PostgreSQL database, and the API sends back the result in the response. Once it receives this response, the client renders the data in the table.

To run the client-side application, execute the following commands within the root of the project's directory: Inside of your browser, visit this application at http://localhost:3000/ . Before running the server-side application, add a .env.development file with the following environment variables within the root of the project's directory: ( .env.development ) To run the server-side application, execute the following commands within the root of the project's directory:

Currently, the client-side application only displays the data within a table. For it to display the data within a map visualization, we will need to install several NPM packages: The Google Maps API requires an API key, which tracks your map usage. It provides a free quota of Google Maps queries, but once you exceed the quota, you will be billed for the excess usage. Without a valid API key, Google Maps fails to load:

The process of generating an API key involves a good number of steps, but it should be straightforward:

1. Navigate to your Google Cloud dashboard and create a new project. Let's name the project "react-google-maps-sql-viz."
2. Once the project is created, select this project as the current project in the notifications pop-up. This reloads the dashboard with this project now selected as the current project.
3. Click on the "+ Enable APIs and Services" button. Within the API library page, click on the "Maps JavaScript API" option and enable it. Once enabled, the dashboard redirects you to the metrics page of the Maps JavaScript API.
4. Click the "Credentials" option in the left sidebar. Within the "Credentials" page, click the "Credentials in APIs & Services" link. Because this is a new project, there should be zero credentials listed.
5. Click the "+ Create Credentials" button, and within the pop-up dropdown, click the "API key" option. This will generate an API key with default settings. Copy the API key to your clipboard and close the modal.
6. Click on the pencil icon to rename the API key and restrict it to our client-side application. Rename the API key to "Google Maps API Key - Development." This key will be reserved for local development, and usage metrics recorded during local development will be tied to this single key.
7. Under the "Application Restrictions" section, select the "HTTP referrers (web sites)" option. Below, the "Website restrictions" section appears. Click the "Add an Item" button and enter the referrer " http://localhost:3000/* " as a new item. This ensures our API key can only be used by applications running on http://localhost:3000/ ; the key will be invalid for other applications.
8. Finally, under the "API Restrictions" -> "Restrict Key" section, select the "Maps JavaScript API" option in the <select /> element so this key only allows access to the Google Maps API. All other APIs are off limits.
9. After you finish making these changes, press the "Save" button. The dashboard redirects you back to the "API & Services" page, which now displays the updated API key information.

Note: Press the "Regenerate Key" button if the API key is compromised or accidentally leaked in a public repository, etc.
Also, don't forget to enable billing! Otherwise, the map tiles fail to load. When you create a billing account and link the project to the billing account, you must provide a valid credit/debit card.

When running the client-side application in different environments, each environment supplies a different set of environment variables to the application. For example, if you decide to deploy this client-side application to production, then you would provide a different API key than the one used for local development. The API key used for local development comes with its own set of restrictions, such as only being valid for applications running on http://localhost:3000/ , and collects metrics specific to local development. For local development, let's create a .env file at the root of the client-side application's project directory. For environment variables to be accessible by Create React App, they must be prefixed with REACT_APP . Therefore, let's name the API key's environment variable REACT_APP_GOOGLE_MAPS_API_KEY and set it to the API key copied to the clipboard.

Let's start off by adding a map to our client-side application. First, import the following components and hooks from the @react-google-maps/api library: ( src/App.tsx ) Let's destructure out the API key's environment variable from process.env : ( src/App.tsx ) Establish where the map will center. Because our dataset focuses on squirrels within New York City's Central Park, let's center the map at Central Park. We will be adding a marker labeled "Central Park" at this location. ( src/App.tsx )

Within the <App /> functional component, let's declare a state variable that will hold an instance of our map in memory. For now, it will be unused. ( src/App.tsx ) Call the useJsApiLoader hook with the API key and an ID that's set as an attribute of the Google Maps API <script /> tag. Once the API has loaded, isLoaded will be set to true , and we can then render the <GoogleMap /> component. ( src/App.tsx )

Currently, TypeScript doesn't know the type of our environment variable. TypeScript expects the googleMapsApiKey option to be set to a string, but it has no idea if the REACT_APP_GOOGLE_MAPS_API_KEY environment variable is a string or not. Under the NodeJS namespace, define the type of this environment variable as a string within the ProcessEnv interface. ( src/react-app-env.d.ts )

Beneath the table, render the map, but only once the Google Maps API has finished loading. Pass the following props to the <GoogleMap /> component: Here, we set the center of the map to Central Park and set the zoom level to 14. Within the map, add a marker at Central Park, which will physically mark the center of the map. ( src/App.tsx ) The onLoad function will set the map instance in state while the onUnmount function will wipe the map instance from state. ( src/App.tsx )

Altogether, here's how your src/App.tsx should look after making the above modifications. ( src/App.tsx ) Within your browser, visit the application at http://localhost:3000/ . When the application loads, a map is rendered below the empty table. At the center of this map is a marker, and when you hover over this marker, the mouseover text shown will be "Central Park."
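For reference, here is a condensed sketch of what src/App.tsx can look like at this stage (the container size and the exact Central Park coordinates are assumptions):

```tsx
// src/App.tsx: a condensed sketch of the pieces described above.
import { useCallback, useState } from "react";
import { GoogleMap, Marker, useJsApiLoader } from "@react-google-maps/api";

const { REACT_APP_GOOGLE_MAPS_API_KEY } = process.env;

// Assumed coordinates for the center of Central Park.
const CENTER = { lat: 40.7829, lng: -73.9654 };

export default function App() {
  // Holds the map instance; unused for now, mirroring the tutorial.
  const [map, setMap] = useState<google.maps.Map | null>(null);

  const { isLoaded } = useJsApiLoader({
    id: "google-map-script",
    googleMapsApiKey: REACT_APP_GOOGLE_MAPS_API_KEY as string,
  });

  const onLoad = useCallback((m: google.maps.Map) => setMap(m), []);
  const onUnmount = useCallback(() => setMap(null), []);

  return isLoaded ? (
    <GoogleMap
      mapContainerStyle={{ width: "100%", height: "400px" }}
      center={CENTER}
      zoom={14}
      onLoad={onLoad}
      onUnmount={onUnmount}
    >
      {/* The title prop provides the "Central Park" mouseover text. */}
      <Marker position={CENTER} title="Central Park" />
    </GoogleMap>
  ) : null;
}
```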
Suppose we send a query requesting all observations of squirrels with gray-colored fur. When we display these observations as rows within a table, answering questions like "Which section of Central Park had the most observations of squirrels with gray-colored fur?" becomes difficult. However, if we populate the map with markers of these observations, then answering this question becomes easy because we will be able to see where the markers are located and identify clusters of markers.

First, let's import the <InfoWindow /> component from the @react-google-maps/api library. Each <Marker /> component will have an InfoWindow, which displays content in a pop-up window (in this case, it acts as a marker's tooltip), and it will be shown only when the user clicks on the marker. ( src/App.tsx ) Since each observation ("record") will be rendered as a marker within the map, let's add a Record interface that defines the shape of the data representing these observations mapped to <Marker /> components. ( src/App.tsx ) We only want one InfoWindow to be opened at any given time. Therefore, we will need a state variable to store an ID of the currently opened InfoWindow. ( src/App.tsx )

Map each observation to a <Marker /> component. Each <Marker /> component has a corresponding <InfoWindow /> component. When a marker is clicked by the user, the marker's corresponding InfoWindow appears with information about the color of the squirrel's fur for that single observation. Since every observation has a unique ID, only one InfoWindow will be shown at any given time. ( src/App.tsx ) Altogether, here's how your src/App.tsx should look after making the above modifications. ( src/App.tsx )

Within the query builder, add a new rule by clicking the "+Rule" button. Set this rule's field to "Primary Fur Color" and enter "Gray" into the value editor. Keep the operator as the default "=" sign. When this query is sent to the Express.js API's POST /api/records endpoint, it produces the condition primary_fur_color = 'Gray' for the SQL statement's WHERE clause and will fetch all of the observations involving squirrels with gray-colored fur. Press the "Send Query" button.

Due to the high number of records returned by the API in the response, the browser may freeze temporarily while rendering all the rows in the table and markers in the map. Once the browser finishes rendering these items, notice how there are many markers on the map and no discernible spatial patterns in the observations. Yikes! For large datasets, rendering a marker for each individual observation causes massive performance issues. To avoid these issues, let's make several adjustments:

Define a limit on the number of rows that can be added to the table. ( src/App.tsx ) Add a state variable to track the number of rows displayed in the table. Initialize it to five rows. ( src/App.tsx ) Anytime new data is fetched from the API as a result of a new query, reset the number of rows displayed in the table back to five rows. ( src/App.tsx ) Using the slice method, we can limit the number of rows displayed in the table. This number is increased by five each time the user clicks the "Load 5 More Records" button, and the button disappears once all of the rows are displayed. ( src/App.tsx )

To render a heatmap layer, import the <HeatmapLayer /> component and tell the Google Maps API to load the visualization library. For the libraries option to be set to LIBRARIES , TypeScript must be reassured that LIBRARIES will only contain specific library names. Therefore, import the Libraries type from @react-google-maps/api/dist/utils/make-load-script-url and annotate LIBRARIES with this type. ( src/App.tsx ) ( src/App.tsx ) ( src/App.tsx ) ( src/App.tsx ) Pass a list of the observations' coordinate points to the <HeatmapLayer /> component's data prop.
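As a rough sketch of those heatmap-related changes (the component and prop names follow @react-google-maps/api; the standalone component shape, coordinates and Record fields are assumptions):

```tsx
import { GoogleMap, HeatmapLayer, useJsApiLoader } from "@react-google-maps/api";
import { Libraries } from "@react-google-maps/api/dist/utils/make-load-script-url";

// The "visualization" library must be loaded for HeatmapLayer to work.
const LIBRARIES: Libraries = ["visualization"];

// Named Record as in the tutorial; x is longitude and y is latitude.
interface Record {
  x: number;
  y: number;
}

export function SquirrelHeatmap({ records }: { records: Record[] }) {
  const { isLoaded } = useJsApiLoader({
    id: "google-map-script",
    googleMapsApiKey: process.env.REACT_APP_GOOGLE_MAPS_API_KEY as string,
    libraries: LIBRARIES,
  });

  if (!isLoaded) return null;

  return (
    <GoogleMap
      mapContainerStyle={{ width: "100%", height: "400px" }}
      center={{ lat: 40.7829, lng: -73.9654 }}
      zoom={14}
    >
      {/* One point per observation; density shows up as color intensity. */}
      <HeatmapLayer
        data={records.map((r) => new google.maps.LatLng(r.y, r.x))}
      />
    </GoogleMap>
  );
}
```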
( src/App.tsx ) Altogether, here's how your src/App.tsx should look after making the above modifications. ( src/App.tsx ) Save the changes and re-enter the same query into the query builder. Now the table displays only the first five observations of the fetched data, and the heatmap visualization clearly distinguishes the areas with no observations from the areas with many observations.

Click here for the final version of this project. Click here for the final version of this project styled with Tailwind CSS. Try visualizing the data with other Google Maps layers.



Deploying a Node.js and PostgreSQL Application to Heroku

Serving a web application to a global audience requires deploying, hosting and scaling it on reliable cloud infrastructure. Heroku is a cloud platform as a service (PaaS) that supports many server-side languages (e.g., Node.js, Go, Ruby and Python), monitors application status in a beautiful, customizable dashboard and maintains an add-ons ecosystem for integrating tools/services such as databases, schedulers, search engines, document/image/video processors, etc. Although it is built on AWS, Heroku is simpler to use than AWS: it automatically provisions resources and configures low-level infrastructure so developers can focus exclusively on their application without the additional headache of manually setting up each piece of hardware and installing an operating system, runtime environment, etc.

When deploying to Heroku, Heroku's build system packages the application's source code and dependencies together with a language runtime using a buildpack and slug compiler to generate a slug , which is a highly optimized and compressed version of your application. Heroku loads the slug onto a lightweight container called a dyno . Depending on your application's resource demands, it can be scaled horizontally across multiple concurrent dynos. These dynos run on a shared host, but the dynos responsible for running your application are isolated from dynos running other applications. Initially, your application will run on a single web dyno, which serves your application to the world. If a single web dyno cannot sufficiently handle incoming traffic, then you can always add more web dynos. For requests that take longer than 500 ms to complete, such as uploading media content, consider delegating this expensive work as a background job to a worker dyno. Worker dynos process these jobs from a job queue and run asynchronously to web dynos, freeing up the resources of those web dynos.

Below, I'm going to show you how to deploy a Node.js and PostgreSQL application to Heroku. First, let's download the Node.js application by cloning the project from its GitHub repository: Let's walk through the architecture of our simple Node.js application. It is a multi-container Docker application that consists of three services: an Express.js server, a PostgreSQL database and pgAdmin. As a multi-container Docker application orchestrated by Docker Compose , the PostgreSQL database and pgAdmin containers are spun up from the postgres and dpage/pgadmin4 images respectively. These images do not need any additional modifications. ( docker-compose.yml )

The Express.js server, which resides in the api subdirectory, connects to the PostgreSQL database via the pg PostgreSQL client. The module api/lib/db.js defines a Database class that establishes a reusable pool of clients upon instantiation for efficient memory consumption. The connection string URI follows the format postgres://[username]:[password]@[host]:[port]/[db_name] , and it is accessed from the environment variable DATABASE_URL . Anytime a controller function (the callback argument of the methods app.get , app.post , etc.) calls the query method, the server connects to the PostgreSQL database via an available client from the pool. Then, the server queries the database, directly passing the arguments of the query method to the client.query method. Once the database sends the requested data back to the server, the client is released back to the pool, available for the next request to use.
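Here is a sketch, in TypeScript, of what such a pooled Database class can look like (the actual api/lib/db.js is plain JavaScript, and its exact shape may differ):

```ts
import { Pool, QueryResult } from "pg";

class Database {
  private pool: Pool;

  constructor() {
    // DATABASE_URL follows postgres://[username]:[password]@[host]:[port]/[db_name]
    this.pool = new Pool({ connectionString: process.env.DATABASE_URL });
  }

  // Borrow a client from the pool, run the query and always release the client.
  async query(text: string, params?: unknown[]): Promise<QueryResult> {
    const client = await this.pool.connect();
    try {
      return await client.query(text, params);
    } finally {
      client.release(); // make the client available for the next request
    }
  }
}

export default new Database();
```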
Additionally, there's a getAllTables method for retrieving low-level information about the tables available in our PostgreSQL database. In this case, our database only contains a single table: cp_squirrels . ( api/lib/db.js )

The table cp_squirrels is seeded with records from the 2018 Central Park Squirrel Census dataset downloaded from the NYC Open Data portal. The dataset, downloaded as a CSV file, contains the fields obs_date (observation date) and lat_lng (coordinates of observation) with values that are not compatible with the PostgreSQL data types DATE and POINT respectively. Instead of directly copying the contents of the CSV file to the cp_squirrels table, copy from the output of a GNU awk ("gawk") script. This script... ( db/create.sql ) Upon the initialization of the PostgreSQL database container, this SQL file is run by adding it to the docker-entrypoint-initdb.d directory. ( db/Dockerfile )

This server exposes a RESTful API with two endpoints: GET /tables and POST /api/records . The GET /tables endpoint simply calls the db.getAllTables method, and the POST /api/records endpoint retrieves data from the PostgreSQL database based on a query object sent within the incoming request. To bypass CORS restrictions for clients hosted on a different domain (or running on a different port on the same machine), all responses must have the Access-Control-Allow-Origin header set to the allowable domain ( process.env.CLIENT_APP_URL ) and the Access-Control-Allow-Headers header set to Origin, X-Requested-With, Content-Type, Accept . ( api/index.js ) Notice that the Express.js server requires three environment variables: CLIENT_APP_URL , PORT and DATABASE_URL . These environment variables must be added to Heroku, which we will do later on in this post.

The Dockerfile for the Express.js server instructs how to build the server's Docker image based on its needs. It automates the process of setting up and running the server. Since the server must run within a Node.js environment and relies on several third-party dependencies, the image must be built upon the node base image, and the project's dependencies must be installed before running the server via the npm start command. ( api/Dockerfile ) However, because the filesystem of a Heroku dyno is ephemeral , volume mounting is not supported. Therefore, we must create a new file named Dockerfile-heroku that is dedicated only to the deployment of the application to Heroku and not reliant on a volume. ( api/Dockerfile-heroku )

Unfortunately, you cannot deploy a multi-container Docker application via Docker Compose to Heroku. Therefore, we must deploy the Express.js server to a web dyno with Docker and separately provision a PostgreSQL database via the Heroku Postgres add-on . To deploy an application with Docker, you must either push a pre-built image to the Heroku Container Registry or have Heroku build the image from a heroku.yml manifest. For this tutorial, we will deploy the Express.js server to Heroku by building a Docker image with heroku.yml and deploying this image to Heroku.

Let's create a heroku.yml manifest file inside of the api subdirectory. Since the Express.js server will be deployed to a web dyno, we must specify the Docker image to build for the application's web process, which the web dyno belongs to: ( api/heroku.yml ) Because our api/Dockerfile already has a CMD instruction, which specifies the command to run within the container, we don't need to add a run section. Let's add a setup section, which defines the environment's add-ons and configuration variables during the provisioning stage. Within this section, add the Heroku PostgreSQL add-on: choose the free "Hobby Dev" plan and give it the unique name DATABASE , which is optional and used to distinguish it from other Heroku PostgreSQL add-ons.
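Based on that description, the api/heroku.yml manifest might look roughly like this (a sketch; the exact file in the repository may differ):

```yaml
# Build the web process's image from the Heroku-specific Dockerfile.
build:
  docker:
    web: Dockerfile-heroku

# Provision the PostgreSQL add-on when the app is created from this manifest.
setup:
  addons:
    - plan: heroku-postgresql:hobby-dev
      as: DATABASE
```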
Fortunately, once the PostgreSQL database is provisioned, the DATABASE_URL environment variable, which contains the database connection information for this newly provisioned database, will be made available to our application.

Check if your machine already has the Heroku CLI installed. If not, then install it; for macOS, it can be installed via Homebrew: For other operating systems, follow the instructions here. After installation, for the setup section of the heroku.yml manifest file to be recognized and used when creating a Heroku application, switch to the beta update channel and install the heroku-manifest plugin: Without this step, the PostgreSQL database add-on will not be provisioned from the heroku.yml manifest file; you would have to manually provision the database via the Heroku dashboard or the heroku addons:create command. Once installed, close out the terminal window and open a new one for the changes to take effect. Note : To switch back to the stable update stream and uninstall this plugin:

Now, authenticate yourself by running the following command: Note : If you want to remain within the terminal, as in entering your credentials directly within the terminal, then add the -i option after the command. This command prompts you to press any key to open a login page within a web browser. Enter your credentials within the login form. Once authenticated, the Heroku CLI will automatically log you in.

Within the api subdirectory, create a Heroku application with the --manifest flag: This command automatically sets the stack of the application to container and sets the remote repository of the api subdirectory to heroku . When you visit the Heroku dashboard in a web browser, this newly created application is listed under your "Personal" applications: Set the configuration variable CLIENT_APP_URL to a domain that should be allowed to send requests to the Express.js server. Note : The PORT environment variable is automatically exposed by the web dyno for the application to bind to. As previously mentioned, once the PostgreSQL database is provisioned, the DATABASE_URL environment variable will automatically be exposed. Under the application's "Settings" tab in the Heroku dashboard, you can find all configuration variables set for your application under the "Config Vars" section.

Create a .gitignore file within the api subdirectory. ( api/.gitignore ) Commit all the files within the api subdirectory: Push the application to the remote Heroku repository. The application will be built and deployed to the web dyno. Ensure that the application has successfully deployed by checking the logs of this web dyno: If you visit https://<application-name>.herokuapp.com/tables in your browser, then a successful response is returned and printed to the browser.

In case the PostgreSQL database is not provisioned, manually provision it using the following command: Then, restart the dynos for the DATABASE_URL environment variable to be available to the Express.js server at runtime. Deploy your own containerized applications to Heroku!


React Query Builder - The Ultimate Querying Interface

For businesses looking to optimize their operations, data influences the decisions being made. For scientists looking to validate their hypotheses, data influences the conclusions being arrived at. Either way, the sheer amount of data collected and harnessed from various sources presents the challenge of identifying rising trends and interesting patterns hidden within this data. If the data is stored within a SQL database, such as PostgreSQL , querying data with the expressive power of the SQL language unlocks the data's underlying value.

Creating interfaces that fully leverage the constructs of SQL in analytics dashboards can be difficult if done from scratch. With a library like React Query Builder , which contains a query builder component for fetching and exploring rows of data with the exact same query and filter rules provided by the SQL language, we can develop flexible, customizable interfaces for users to easily access data from their databases. Although there are open source administrative tools like pgAdmin , these tools cannot be integrated directly into a custom analytics dashboard (unless embedded within an iframe). Additionally, you would need to manage more user credentials and permissions, and these tools may be considered too overwhelming or technical for users who aren't concerned with advanced features, such as a procedural language debugger, or with intricate back-end and database configurations.

By default, the <QueryBuilder /> component from the React Query Builder library contains a minimal set of controls only for querying data with pre-defined rules. Once the requested data is queried, it can be summarized by rendering it within a data visualization, such as a table or a line graph. Below, I'm going to show you how to integrate the React Query Builder library into your application to gain insights into your data.

To get started, scaffold a basic React project with the Create React App and TypeScript boilerplate template. Inside of this project's root directory, install the react-querybuilder dependency: If you happen to run into the following TypeScript error... Could not find a declaration file for module 'react'. '<project-name>/node_modules/react/index.js' implicitly has an 'any' type. ...then add the "noImplicitAny": false configuration under compilerOptions inside of tsconfig.json to resolve it.

React Query Builder composes a query from the rules or groups of rules set within the query builder interface. This query, in JSON form, should be sent to a server-side application that's connected to a PostgreSQL database to properly format the query into a SQL statement and execute the statement to fetch records of data from the database. For this tutorial, we will send this query to an Express.js API running within a multi-container Docker application. This application also runs a PostgreSQL database and pgAdmin in separate containers. The API connects to the PostgreSQL database and defines a POST route for processing the query. With Docker Compose, you can execute a single command to spin up all of these services at once on a single host machine! To run the entire back-end, you don't need to manually install PostgreSQL or pgAdmin on your machine; you only need Docker. Plus, if you decide to run other services, such as NGINX or Redis , then you can add them within the docker-compose.yml configuration file.
Clone the following repository: Inside the root of this cloned project, add a .env.development file with the following environment variables: To run the server-side application, execute the following command: This command starts up the server-side application. When you re-build and restart the application with this same command, it will do so from scratch with the latest images. It's up to you if you want to leverage caching to expedite the build and start-up processes. Nevertheless, let's break down what this command does: For each docker-compose command, pass a set of environment variables via the --env-file option. This approach to setting environment variables allows these variables to be accessed within the docker-compose.yml file and easily works in a CI/CD pipeline. Since the .env.<environment> files are typically not pushed to the remote repository (i.e., ignored by Git), especially for public-facing projects, when deploying this project to a cloud platform, the environment variables set within the platform's dashboard function the same way as those set by the --env-file option.

The PostgreSQL database contains only one table, named cp_squirrels , that is seeded with 2018 Central Park Squirrel Census data downloaded from the NYC Open Data portal. Each record represents a sighting of an eastern gray squirrel in New York City's Central Park in the year 2018.

Let's verify that pgAdmin is running by visiting localhost:5050 in the browser. Here, you will be presented with a log-in page. Enter your credentials ( NYCSC_PGADMIN_EMAIL and NYCSC_PGADMIN_PASSWORD ) into the log-in form. On the pgAdmin welcome page, right-click on "Servers" in the "Browser" tree control (in the left pane) and, in the dropdown, click Create > Server . Under "General," set the server name to nyc_squirrels . Under "Connection," set the host name to nycsc-pg-db , the container name set for our PostgreSQL database container; it is where our PostgreSQL database is virtually hosted on our local machine. Set the username and password to the values of NYCSC_PGADMIN_EMAIL and NYCSC_PGADMIN_PASSWORD respectively. Save those server configurations. Wait for pgAdmin to connect to the PostgreSQL database. Once connected, it should appear under the "Browser" tree control. Right-click on the database ( nyc_squirrels ) in the "Browser" tree control and, in the dropdown, click the Query Tool option. Inside of the query editor, type a simple SQL statement to verify that the database has been properly seeded: This statement should return the first ten records of the cp_squirrels table.

Let's verify that the Express.js API is running by visiting localhost:<NYCSC_API_PORT>/tables in the browser. The browser should display low-level information about the tables available in our PostgreSQL database. In this case, our database only contains a single table: cp_squirrels . Great! With the server-side working as intended, let's turn our attention back to integrating the React Query Builder component into the client-side application.

Inside of our Create React App project's src/App.tsx file, import the <QueryBuilder /> component from the React Query Builder library. At a minimum, this component accepts two props: fields (the fields a rule can filter on) and onQueryChange (a callback invoked whenever the query changes). This is what the query builder looks like without any styling and with only these two props passed to the <QueryBuilder /> component: This probably doesn't make much sense yet, so let's immediately jump into a basic example to better understand the capabilities of this component. Let's make the following adjustments to the src/App.tsx file to create a very basic query builder:
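Here is a minimal sketch of that basic, uncontrolled builder (the field list is trimmed to the columns this tutorial queries, and the import style may vary by react-querybuilder version):

```tsx
// src/App.tsx
import React from "react";
import { QueryBuilder, RuleGroupType } from "react-querybuilder";

// name maps to a cp_squirrels column; label is the presentable title.
const fields = [
  { name: "x", label: "X" }, // longitude
  { name: "y", label: "Y" }, // latitude
  { name: "primary_fur_color", label: "Primary Fur Color" },
];

const App = () => {
  // Invoked on every change made inside the query builder.
  const logQuery = (query: RuleGroupType) => console.log(query);

  return <QueryBuilder fields={fields} onQueryChange={logQuery} />;
};

export default App;
```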
Open the application within your browser. The following three-element component is shown in the browser: The first element is the combinator selector , a <select /> element that contains two options: AND and OR . These options correspond to the AND and OR operators of a SQL statement's WHERE clause. The second element is the add rule action , a <button /> element ( +Rule ) that, when pressed, adds a rule. If you press this button, then a new rule is rendered beneath the initial query builder component:

A rule consists of a field , an operator and a value editor , and it corresponds to a condition specified in a SQL statement's WHERE clause. The field <select /> element lists all of the fields passed into the fields prop. Notice that the label of the field is shown in this element. The operator <select /> element lists all of the possible comparison/logical operators that can be used in a condition. Lastly, the value editor <input /> element contains what the field will be compared to. For example, if we type -73.9561344937861 into the <input /> field, then the condition that will be specified in the WHERE clause is X = -73.9561344937861 . Basically, this will fetch all squirrel sightings located at the longitudinal value of -73.9561344937861 . With only one rule, the combinator selector is not applicable. However, if we press the add rule action button again, another rule will be rendered, and the combinator selector will become applicable. With two rules, two conditions are specified and combined with the AND operator: X = -73.9561344937861 AND Y = 40.7940823884086 .

The third element is the add group action , a <button /> element ( +Group ) that, when pressed, adds an empty group of rules. If you press this button, then a new group is rendered beneath whatever has already been rendered in the query builder component: Currently, there are no rules within the newly created group. Let's add two new rules to this group by pressing its add rule action button twice and change the value of its combinator selector to OR , like so: The two rules within this new group are combined together, similar to placing parentheses around certain conditions in a WHERE clause to give them a higher priority during evaluation. For the above case, the overall condition specified in the WHERE clause would be X = -73.9561344937861 AND Y = 40.7940823884086 AND (X = -73.9688574691102 OR Y = 40.7837825208444) .

A total of eight fields are defined. Essentially, they are based on the columns of the cp_squirrels table. For each field, the name property corresponds to the actual column name, and the label property corresponds to a more presentable column title that is shown in the field <select /> element of each rule. If you look into the developer tools console, then you will see many query objects logged to the console: Every single action performed on the query builder that changes the query will invoke the logQuery function, which prints the query to the console. If we import the formatQuery function from the react-querybuilder library and call it inside of logQuery with the query, then we can format the query in many different ways. For now, let's format the query to a SQL WHERE clause: ( src/App.tsx )
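A sketch of that change (only logQuery needs to grow; the "sql" export format is part of react-querybuilder's formatQuery API):

```tsx
import { formatQuery, RuleGroupType } from "react-querybuilder";

const logQuery = (query: RuleGroupType) => {
  console.log(query);                     // the raw query object
  console.log(formatQuery(query, "sql")); // e.g., (X = '-73.9561344937861' and Y = '40.7940823884086')
};
```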
If we modify any of the controls' values, then both the query (in its raw object form) and its formatted string (as a condition of a WHERE clause) are printed to the console: With the fundamentals out of the way, let's focus on sending the query to our Express.js API to fetch data from our PostgreSQL database.

Inside of src/App.tsx , let's add a "Send Query" button below the <QueryBuilder /> component: Note : The underscore prefix of the _evt argument indicates an unused argument. When the user clicks this button, the client will send the most recent query to the /api/records endpoint of the Express.js API. This endpoint takes the query, formats it into a SQL statement, executes this SQL statement and responds with the result table. We will need to store the query inside a state variable to allow other functions within the <App /> component, such as sendQuery , to access the query. This changes our uncontrolled component to a controlled component . ( src/App.tsx ) Anytime onQueryChange is invoked, the setUpdateQuery method will update the value of the updateQuery variable, which must adhere to the type RuleGroupType . Update the sendQuery function to send updateQuery to the /api/records endpoint and log the data in the response (a condensed sketch of this flow appears at the end of this tutorial). ( src/App.tsx )

Inside of the query builder, if we want to retrieve squirrel sightings found at the coordinates (40.7940823884086, -73.9561344937861), then create two rules: one for X (longitude) and one for Y (latitude). When we press the "Send Query" button, the result table (in JSON) is printed to the console: Only one squirrel sighting was observed at that particular set of coordinates. Let's display the result table in a simple table: ( src/App.tsx ) Press the "Send Query" button again. The result table (with only one record) should be displayed within a table. The best part is that you can add other visualization components to display your fetched data. The sky's the limit!

Click here for the final version of this project. Visit the React Query Builder documentation to learn more about how you can customize it to your application's needs.
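As promised, here is a condensed sketch of the controlled builder plus the fetch flow (the API origin/port and the shape of the request and response bodies are assumptions; adjust them to your .env.development values):

```tsx
// src/App.tsx
import React, { useState } from "react";
import { QueryBuilder, RuleGroupType } from "react-querybuilder";

const fields = [
  { name: "x", label: "X" },
  { name: "y", label: "Y" },
  { name: "primary_fur_color", label: "Primary Fur Color" },
];

const App = () => {
  // Controlled query state; updated on every builder change.
  const [updateQuery, setUpdateQuery] = useState<RuleGroupType>({
    combinator: "and",
    rules: [],
  });

  const sendQuery = async (_evt: React.MouseEvent<HTMLButtonElement>) => {
    // POST the latest query; the API formats and executes the SQL statement.
    const res = await fetch("http://localhost:8000/api/records", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ query: updateQuery }),
    });
    console.log(await res.json()); // the result table, in JSON
  };

  return (
    <div>
      <QueryBuilder
        fields={fields}
        query={updateQuery}
        onQueryChange={(q: RuleGroupType) => setUpdateQuery(q)}
      />
      <button onClick={sendQuery}>Send Query</button>
    </div>
  );
};

export default App;
```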
