Tutorials on APIs

Learn about APIs from fellow newline community members!

  • React
  • Angular
  • Vue
  • Svelte
  • NextJS
  • Redux
  • Apollo
  • Storybook
  • D3
  • Testing Library
  • JavaScript
  • TypeScript
  • Node.js
  • Deno
  • Rust
  • Python
  • GraphQL

Building an API using Firebase Functions for cheap

When I'm working on personal projects, I often need to set up an API that serves data to my app or web pages, and I get frustrated when I end up spending too much time on hosting and environment issues. These days, I host the API using Cloud Functions for Firebase. It hits all my requirements:

The official name is Cloud Functions for Firebase. In this article, I am going to call it Firebase Functions, mostly to distinguish it from Google's other serverless functions-as-a-service: Cloud Functions. You can read more about the differences here. From that page:

While I'm not going to write a mobile app in this article, I like to use Firebase Functions because:

If all this isn't confusing enough, Google is rolling out a new version of Cloud Functions, called 2nd generation, which is in "Public Preview". So in order to move forward, let's identify our working assumptions:

After all this is complete, you should have a single file called firebase.json and a directory called functions. The functions directory is where we'll write our API code.

Let's take the emulator out for a spin. Congrats, you have Firebase Functions working on your local system! To exit the emulator, press Ctrl-C in your terminal window.

This is all very exciting. Let's push our new "hello world" function into the cloud. From the command line, type:

The output should look similar (though not identical) to:

If we navigate to the Function URL, we should see the 'Hello from Firebase!' message. Exciting! Do you see how easy it is to create Firebase Functions? We've already done the hard part: setting up our local environment and the Firebase project.

Let's jump into creating an API using Express. Install express:

Next, edit the index.js file to look like:

Then, if you run the emulator, you can load your API locally. Note that the URL on the emulator is a little different: it should have 'api' added at the end, like:

You should see our 'Hello World' message. Now for more fun, add '/testJSON' to the end of your link. You should see the browser return the JSON data that our API has sent:

Now, finally, let's deploy to the cloud:

Note that when you try to deploy, Firebase is smart enough to detect that major changes to the URL structure have occurred. You'll need to verify that you did indeed make these changes and that everything is OK. Since this is a trivial function, you can type Yes. Firebase will delete the old function we deployed earlier and create a new one. Once that completes, try to load the link and validate that your API is now working!

This article has walked you through the basics of using Firebase Functions to host your own API. The process of writing a full-featured API is beyond the scope of this article, and there are many resources out there to help with that task, but I hope you'll think about Firebase Functions next time you start a project.
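For reference, the core pattern this article builds up to, an Express app exported as a single Firebase Function, looks roughly like this (a sketch, not the article's exact code; the testJSON payload is invented):

```javascript
// functions/index.js -- a minimal sketch, not the article's exact code
const functions = require("firebase-functions");
const express = require("express");

const app = express();

// GET <function-url>/api/ -> plain-text greeting
app.get("/", (req, res) => {
  res.send("Hello World");
});

// GET <function-url>/api/testJSON -> a JSON payload (contents invented for this sketch)
app.get("/testJSON", (req, res) => {
  res.json({ message: "Hello from the API", ok: true });
});

// Exporting the app as `api` is what puts 'api' at the end of the emulator and deployed URLs
exports.api = functions.https.onRequest(app);
```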


5 Minute Intro to REST APIs

Learn the basics of APIs and what makes an API "RESTful".

So what is an API (Application Programming Interface)? It's pretty simple from a high level: two applications interfacing (or "talking") with each other via programs. APIs also make our lives easier by abstracting away complex code and replacing it with simpler, more straightforward code. For example, think about what happens when you open a music streaming app and play a song. There's probably some code that runs when you hit play that looks something like this:

That play() function is an API method that abstracts away things like:

As developers, we don't need to know or care about the lower-level code when we call the play() method, and that's fine! APIs are here to make our lives easier. In JavaScript, APIs are usually based on objects. In the example above, the mediaPlayer object is the entry point for the API, and play() is the API method.

Let's take a look at a more common example you've probably encountered before. If you've ever written JavaScript like this:

then congrats, you've used the DOM (Document Object Model) API! Here, the document object is the entry point for the DOM API, and querySelector() is the API method.

In the web development world, APIs generally fall into two categories:

So if an API is a set of rules that allow one piece of software to talk to another, then REST (Representational State Transfer) is a standard set of rules that developers follow to create an API. There are six conditions that need to be met for an API to be considered RESTful:

I think you'd be hard-pressed to find anyone who remembers the definition of each of these conditions off the top of their head, so the gist of it is this: in a nutshell, you should be able to receive a piece of data (a.k.a. a resource) when you access a predefined URL. Each of these URLs, also known as "endpoints", is part of what is called a request, and the data you receive from the request is called a response. The data provided by these responses is usually in the form of JSON (JavaScript Object Notation).

A RESTful request is composed of four parts:

Let's take a quick look at what each of these four parts is.

The "endpoint" is the URL you request. All REST APIs have a base URL (also known as the "root" endpoint). For example, the base URL of Spotify's API is https://api.spotify.com while the base URL of GitHub's API is https://api.github.com. Additionally, each endpoint in a REST API has a path that determines the resource being requested. In the example endpoint above, https://api.spotify.com/v1 is the base URL, and /playlists/59ZbFPES4DQwEjBpWHzrtC is the path to a particular playlist.

The HTTP method is the type of request you send to the server. There are five common HTTP methods: GET, POST, PUT, PATCH, and DELETE. They are used to perform four possible operations: Create, Read, Update, or Delete (CRUD). When working with a REST API, the documentation should tell you which HTTP method to use with each request.

Headers are key-value pairs that are used to provide information to both the client and server. You can store things like authentication tokens, cookies, or other metadata in HTTP headers. For example, the header below tells the server to expect JSON content. You can find a complete list of valid headers on MDN's HTTP Headers Reference.

The "body" of a request contains any information you want to send to the server, normally transmitted in the HTTP request's body as a single JSON-encoded data string.
This information is usually the data you'd like to create a new resource with (with a POST request) or the modified value of a resource you'd like to update (with a PUT or PATCH request).

Now that we know the parts of an HTTP request, let's take a look at what is returned in an HTTP response: the status code and the body data. Responses always include an HTTP status code in the response header. These status codes are numbers that indicate whether a request was successfully completed, and they range from the 100s to the 500s. Similar to HTTP requests, responses usually return body data in the form of JSON.

To wrap it all up, let's take a look at a real-world example with the Spotify API. Spotify's Web API, which has a base URL of https://api.spotify.com/v1, has an endpoint that lets you get a playlist by its ID. If you take a look at the docs, you'll see that the HTTP request's endpoint ( https://api.spotify.com/v1/playlists/{playlist_id} ) requires a GET HTTP method and must have a valid Authorization header. It also requires a path parameter of the playlist's ID. So if you send a GET request to https://api.spotify.com/v1/playlists/59ZbFPES4DQwEjBpWHzrtC , the API's response will include body data that looks like this:

This JSON includes things like the playlist's number of followers, the URL of the cover image, the playlist's name, and much more. Now that you have a solid understanding of REST APIs, try interacting with one yourself! Be sure to check out our new course being released soon, Build a Spotify Connected App, where we apply these concepts to build a real-world, full stack web app!
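To make the Spotify example concrete, here's a hedged sketch of that request using the browser's fetch API; the access token is a placeholder for one obtained through Spotify's OAuth flow:

```javascript
// A sketch of the GET playlist request described above (token is a placeholder)
async function getPlaylist() {
  const ACCESS_TOKEN = "<your-oauth-access-token>";

  const response = await fetch(
    "https://api.spotify.com/v1/playlists/59ZbFPES4DQwEjBpWHzrtC",
    {
      method: "GET", // the HTTP method
      headers: {
        Authorization: `Bearer ${ACCESS_TOKEN}`, // required Authorization header
      },
    }
  );

  console.log(response.status); // HTTP status code, e.g. 200 on success
  const playlist = await response.json(); // body data parsed from JSON
  console.log(playlist.name, playlist.followers.total);
}
```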



Visualizing Geographic SQL Data on Google Maps

Analytics dashboards display different data visualizations to represent and convey data in ways that allow users to quickly digest and analyze information. Most multivariate datasets consumed by dashboards include one or more spatial fields, such as an observation's set of coordinates (latitude and longitude). Plotting this data on a map visualization contextualizes the data within a real-world setting and sheds light on spatial patterns that would otherwise stay hidden in the data. In particular, seeing the distribution of your data across an area connects it to geographical features and area-specific data (e.g., neighborhood/community demographics) available from open data portals.

One of the earliest examples of this is the 1854 cholera visualization by John Snow, who marked cholera cases on a map of London's Soho and uncovered the source of the cholera outbreak by noticing a cluster of cases around a water pump. This discovery helped to correctly identify cholera as a waterborne disease rather than an airborne one. Ultimately, it changed how we think about disease transmission and the impact our surroundings and environment have on our health. If your data contains spatial fields, then you too can apply the simple technique of plotting markers on a map to extract valuable insight from your own data.

Map visualizations are eye-catching and take on many forms: heatmaps, choropleth maps, flow maps, spider maps, etc. Beyond being colorful and aesthetically pleasing, these visualizations provide intuitive controls for users to navigate through their data with little effort. To create a map visualization, many popular libraries (e.g., the Google Maps API and deck.gl) support drawing shapes, adding markers and overlaying geospatial visualization layers on top of a set of base map tiles. Each layer generates a pre-defined visualization based on a collection of data. It associates each data point with certain attributes (color, size, etc.) and renders them onto a map.

By pairing a map visualization library with React.js, developers can build dynamic map visualizations and embed them into an analytics dashboard. If the visualizations' data comes from a PostgreSQL database, then we can make use of PostGIS geospatial functions to help answer interesting questions related to spatial relationships, such as which data points lie within a 1 km radius of a specific set of coordinates.

Below, I'm going to show you how to visualize geographic data queried from a PostgreSQL database on Google Maps. This tutorial will involve React.js and the @react-google-maps/api library, which contains React.js bindings and hooks for the Google Maps API, to create a map visualization that shows the location of data points. To get started, clone the following two repositories:

The first repository contains a Create React App with TypeScript client-side application that displays a query builder for composing and sending queries and a table for presenting the fetched data. The second repository contains a multi-container Docker application that consists of an Express.js API, a PostgreSQL database and pgAdmin. The Express.js API connects to the PostgreSQL database, which contains a single table named cp_squirrels seeded with 2018 Central Park Squirrel Census data from the NYC Open Data portal. Each record in this dataset represents a sighting of an eastern gray squirrel in New York City's Central Park in the year 2018.
When a request is sent to the API endpoint POST /api/records, the API processes the query attached as the body of the request and constructs a SQL statement from it. The pg client executes the SQL statement against the PostgreSQL database, and the API sends back the result in the response. Once it receives this response, the client renders the data in the table.

To run the client-side application, execute the following commands within the root of the project's directory:

Inside of your browser, visit this application at http://localhost:3000/ . Before running the server-side application, add a .env.development file with the following environment variables within the root of the project's directory: (.env.development)

To run the server-side application, execute the following commands within the root of the project's directory:

Currently, the client-side application only displays the data within a table. For it to display the data within a map visualization, we will need to install several NPM packages:

The Google Maps API requires an API key, which tracks your map usage. It provides a free quota of Google Maps queries, but once you exceed the quota, you will be billed for the excess usage. Without a valid API key, Google Maps fails to load:

The process of generating an API key involves a good number of steps, but it should be straightforward. First, navigate to your Google Cloud dashboard and create a new project. Let's name the project "react-google-maps-sql-viz." Once the project is created, select it as the current project in the notifications pop-up. This reloads the dashboard with this project now selected as the current project.

Now click on the "+ Enable APIs and Services" button. Within the API library page, click on the "Maps JavaScript API" option and enable the Maps JavaScript API. Once enabled, the dashboard redirects you to the metrics page of the Maps JavaScript API.

Click the "Credentials" option in the left sidebar. Within the "Credentials" page, click the "Credentials in APIs & Services" link. Because this is a new project, there should be zero credentials listed. Click the "+ Create Credentials" button, and within the pop-up dropdown, click the "API key" option. This will generate an API key with default settings. Copy the API key to your clipboard and close the modal.

Click on the pencil icon to rename the API key and restrict it to our client-side application. Rename the API key to "Google Maps API Key - Development." This key will be reserved for local development, and usage metrics recorded during local development will be tied to this single key. Under the "Application Restrictions" section, select the "HTTP referrers (web sites)" option. Below it, the "Website restrictions" section appears. Click the "Add an Item" button and enter the referrer " http://localhost:3000/* " as a new item. This ensures our API key can only be used by applications running on http://localhost:3000/ . This key will be invalid for other applications.

Finally, under the "API Restrictions" -> "Restrict Key" section, select the "Maps JavaScript API" option in the <select /> element so that this key only allows access to the Google Maps API. All other APIs are off limits. After you finish making these changes, press the "Save" button.

Note: Press the "Regenerate Key" button if the API key is compromised or accidentally leaked in a public repository, etc.

The dashboard redirects you back to the "API & Services" page, which now displays the updated API key information.
Also, don't forget to enable billing! Otherwise, the map tiles fail to load: When you create a billing account and link the project to the billing account, you must provide a valid credit/debit card.

When running the client-side application in different environments, each environment supplies a different set of environment variables to the application. For example, if you decide to deploy this client-side application to production, then you would provide a different API key than the one used for local development. The API key used for local development comes with its own set of restrictions, such as only being valid for applications running on http://localhost:3000/ , and collects metrics specific to local development.

For local development, let's create a .env file at the root of the client-side application's project directory. For environment variables to be accessible by Create React App, they must be prefixed with REACT_APP . Therefore, let's name the API key's environment variable REACT_APP_GOOGLE_MAPS_API_KEY , and set it to the API key copied to the clipboard.

Let's start off by adding a map to our client-side application. First, import the following components and hooks from the @react-google-maps/api library: (src/App.tsx)

Let's destructure the API key's environment variable out of process.env : (src/App.tsx)

Establish where the map will center. Because our dataset focuses on squirrels within New York City's Central Park, let's center the map at Central Park. We will be adding a marker labeled "Central Park" at this location. (src/App.tsx)

Within the <App /> functional component, let's declare a state variable that will hold an instance of our map in memory. For now, it will be unused. (src/App.tsx)

Call the useJsApiLoader hook with the API key and an ID that's set as an attribute of the Google Maps API <script /> tag. Once the API has loaded, isLoaded will be set to true , and we can then render the <GoogleMap /> component. (src/App.tsx)

Currently, TypeScript doesn't know the type of our environment variable. TypeScript expects the googleMapsApiKey option to be set to a string, but it has no idea whether the REACT_APP_GOOGLE_MAPS_API_KEY environment variable is a string or not. Under the NodeJS namespace, define the type of this environment variable as a string within the ProcessEnv interface. (src/react-app-env.d.ts)

Beneath the table, render the map, but only once the Google Maps API has finished loading. Pass the following props to the <GoogleMap /> component: Here, we set the center of the map to Central Park and set the zoom level to 14. Within the map, add a marker at Central Park, which will physically mark the center of the map. (src/App.tsx)

The onLoad function will set the map instance in state, while the onUnmount function will wipe the map instance from state. (src/App.tsx)

Altogether, here's how your src/App.tsx should look after making the above modifications. (src/App.tsx)

Within your browser, visit the application at http://localhost:3000/ . When the application loads, a map is rendered below the empty table. At the center of this map is a marker, and when you hover over this marker, the mouseover text shown will be "Central Park."
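Pulling those steps together, a minimal sketch of the map setup might look like this (not the article's exact code; the container style and script ID are assumptions):

```tsx
// src/App.tsx -- a sketch of the setup described above, not the article's exact code
import { useCallback, useState } from "react";
import { GoogleMap, Marker, useJsApiLoader } from "@react-google-maps/api";

// Approximate center of Central Park
const center = { lat: 40.7829, lng: -73.9654 };

export default function App() {
  const [map, setMap] = useState<google.maps.Map | null>(null);

  const { isLoaded } = useJsApiLoader({
    id: "google-map-script",
    // Typed as a string via the ProcessEnv augmentation described above
    googleMapsApiKey: process.env.REACT_APP_GOOGLE_MAPS_API_KEY,
  });

  const onLoad = useCallback((m: google.maps.Map) => setMap(m), []);
  const onUnmount = useCallback(() => setMap(null), []);

  // Only render the map once the Google Maps API has finished loading
  if (!isLoaded) return null;

  return (
    <GoogleMap
      mapContainerStyle={{ width: "100%", height: "400px" }}
      center={center}
      zoom={14}
      onLoad={onLoad}
      onUnmount={onUnmount}
    >
      {/* Mouseover text of the marker reads "Central Park" */}
      <Marker position={center} title="Central Park" />
    </GoogleMap>
  );
}
```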
Suppose we send a query requesting all squirrel observations that involved a squirrel with gray-colored fur. When we display these observations as rows within a table, answering questions like "Which section of Central Park had the most observations of squirrels with gray-colored fur?" becomes difficult. However, if we populate the map with markers for these observations, then answering this question becomes easy because we will be able to see where the markers are located and identify clusters of markers.

First, let's import the <InfoWindow /> component from the @react-google-maps/api library. Each <Marker /> component will have an InfoWindow, which displays content in a pop-up window (in this case, it acts as a marker's tooltip), and it will be shown only when the user clicks on a marker. (src/App.tsx)

Since each observation ("record") will be rendered as a marker within the map, let's add a Record interface that defines the shape of the data representing these observations mapped to <Marker /> components. (src/App.tsx)

We only want one InfoWindow to be open at any given time. Therefore, we will need a state variable to store the ID of the currently opened InfoWindow. (src/App.tsx)

Map each observation to a <Marker /> component. Each <Marker /> component has a corresponding <InfoWindow /> component. When a marker is clicked by the user, the marker's corresponding InfoWindow appears with information about the color of the squirrel's fur for that single observation. Since every observation has a unique ID, only one InfoWindow will be shown at any given time. (src/App.tsx)

Altogether, here's how your src/App.tsx should look after making the above modifications. (src/App.tsx)
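A hedged sketch of that marker-and-InfoWindow logic; the Record field names are assumptions based on the cp_squirrels dataset:

```tsx
// A sketch of the marker/InfoWindow mapping described above.
// The Record field names are assumptions based on the cp_squirrels dataset.
import { useState } from "react";
import { InfoWindow, Marker } from "@react-google-maps/api";

interface Record {
  unique_squirrel_id: string;
  x: number; // longitude
  y: number; // latitude
  primary_fur_color: string | null;
}

// Render inside <GoogleMap /> so the markers attach to the map
function ObservationMarkers({ records }: { records: Record[] }) {
  // ID of the observation whose InfoWindow is currently open (null = none)
  const [openId, setOpenId] = useState<string | null>(null);

  return (
    <>
      {records.map((record) => (
        <Marker
          key={record.unique_squirrel_id}
          position={{ lat: record.y, lng: record.x }}
          onClick={() => setOpenId(record.unique_squirrel_id)}
        >
          {openId === record.unique_squirrel_id && (
            <InfoWindow onCloseClick={() => setOpenId(null)}>
              <p>Primary fur color: {record.primary_fur_color ?? "Unknown"}</p>
            </InfoWindow>
          )}
        </Marker>
      ))}
    </>
  );
}
```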
Within the query builder, add a new rule by clicking the "+Rule" button. Set this rule's field to "Primary Fur Color" and enter "Gray" into the value editor. Keep the operator as the default "=" sign. When this query is sent to the Express.js API's POST /api/records endpoint, it produces the condition primary_fur_color = 'Gray' for the SQL statement's WHERE clause and will fetch all of the observations involving squirrels with gray-colored fur. Press the "Send Query" button.

Due to the high number of records returned by the API in the response, the browser may freeze temporarily while it renders all the rows in the table and markers on the map. Once the browser finishes rendering these items, notice how there are many markers on the map and no discernible spatial patterns in the observations. Yikes! For large datasets, rendering a marker for each individual observation causes massive performance issues. To avoid these issues, let's make several adjustments:

Define a limit on the number of rows that can be added to the table. (src/App.tsx)

Add a state variable to track the number of rows displayed in the table. Initialize it to five rows. (src/App.tsx)

Anytime new data is fetched from the API as a result of a new query, reset the number of rows displayed in the table back to five. (src/App.tsx)

Using the slice method, we can limit the number of rows displayed in the table. The limit is increased by five each time the user clicks the "Load 5 More Records" button, and this button disappears once all of the rows are displayed. (src/App.tsx)

To render a heatmap layer, import the <HeatmapLayer /> component and tell the Google Maps API to load the visualization library. For the libraries option to be set to LIBRARIES , TypeScript must be reassured that LIBRARIES will only contain specific library names. Therefore, import the Libraries type from @react-google-maps/api/dist/utils/make-load-script-url and annotate LIBRARIES with this type. (src/App.tsx)

Pass a list of the observations' coordinate points to the <HeatmapLayer /> component's data prop. (src/App.tsx)

Altogether, here's how your src/App.tsx should look after making the above modifications. (src/App.tsx)

Save the changes and re-enter the same query into the query builder. Now the table displays only the first five observations of the fetched data, and the heatmap visualization clearly distinguishes the areas with no observations from the areas with many observations.

Click here for the final version of this project. Click here for the final version of this project styled with Tailwind CSS. Try visualizing the data with other Google Maps layers.
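As a starting point for experimenting with layers, here's a hedged sketch of the <HeatmapLayer /> setup described above; the component and prop names come from @react-google-maps/api, while the data shape is assumed:

```tsx
// A sketch of the heatmap setup described above.
import { GoogleMap, HeatmapLayer, useJsApiLoader } from "@react-google-maps/api";
import { Libraries } from "@react-google-maps/api/dist/utils/make-load-script-url";

// Declared outside the component so the array's identity is stable across renders
const LIBRARIES: Libraries = ["visualization"];

function Heatmap({ points }: { points: { x: number; y: number }[] }) {
  const { isLoaded } = useJsApiLoader({
    id: "google-map-script",
    googleMapsApiKey: process.env.REACT_APP_GOOGLE_MAPS_API_KEY,
    libraries: LIBRARIES, // loads the visualization library that HeatmapLayer needs
  });

  if (!isLoaded) return null;

  return (
    <GoogleMap
      mapContainerStyle={{ width: "100%", height: "400px" }}
      center={{ lat: 40.7829, lng: -73.9654 }}
      zoom={14}
    >
      {/* One LatLng per observation; dense areas show up as hot spots */}
      <HeatmapLayer data={points.map((p) => new google.maps.LatLng(p.y, p.x))} />
    </GoogleMap>
  );
}
```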


Deploying a Node.js and PostgreSQL Application to Heroku

Serving a web application to a global audience requires deploying, hosting and scaling it on reliable cloud infrastructure. Heroku is a cloud platform as a service (PaaS) that supports many server-side languages (e.g., Node.js, Go, Ruby and Python), monitors application status in a beautiful, customizable dashboard and maintains an add-ons ecosystem for integrating tools/services such as databases, schedulers, search engines, document/image/video processors, etc. Although it is built on AWS, Heroku is simpler to use than AWS. Heroku automatically provisions resources and configures low-level infrastructure so developers can focus exclusively on their application without the additional headache of manually setting up each piece of hardware and installing an operating system, runtime environment, etc.

When deploying to Heroku, Heroku's build system packages the application's source code and dependencies together with a language runtime using a buildpack and slug compiler to generate a slug, which is a highly optimized and compressed version of your application. Heroku loads the slug onto a lightweight container called a dyno. Depending on your application's resource demands, it can be scaled horizontally across multiple concurrent dynos. These dynos run on a shared host, but the dynos responsible for running your application are isolated from dynos running other applications.

Initially, your application will run on a single web dyno, which serves your application to the world. If a single web dyno cannot sufficiently handle incoming traffic, then you can always add more web dynos. For requests that take longer than 500 ms to complete, such as uploading media content, consider delegating this expensive work as a background job to a worker dyno. Worker dynos process these jobs from a job queue and run asynchronously to web dynos, freeing up the resources of those web dynos.

Below, I'm going to show you how to deploy a Node.js and PostgreSQL application to Heroku. First, let's download the Node.js application by cloning the project from its GitHub repository:

Let's walk through the architecture of our simple Node.js application. It is a multi-container Docker application that consists of three services: an Express.js server, a PostgreSQL database and pgAdmin. As a multi-container Docker application orchestrated by Docker Compose, the PostgreSQL database and pgAdmin containers are spun up from the postgres and dpage/pgadmin4 images respectively. These images do not need any additional modifications. (docker-compose.yml)

The Express.js server, which resides in the api subdirectory, connects to the PostgreSQL database via the pg PostgreSQL client. The module api/lib/db.js defines a Database class that establishes a reusable pool of clients upon instantiation for efficient memory consumption. The connection string URI follows the format postgres://[username]:[password]@[host]:[port]/[db_name] , and it is accessed from the environment variable DATABASE_URL . Anytime a controller function (the callback argument of the methods app.get , app.post , etc.) calls the query method, the server connects to the PostgreSQL database via an available client from the pool. Then, the server queries the database, directly passing the arguments of the query method to the client.query method. Once the database sends the requested data back to the server, the client is released back to the pool, available for the next request to use.
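The api/lib/db.js module isn't reproduced in this excerpt; a minimal sketch of the pattern it describes, using the pg library's connection pool, could look like this:

```javascript
// api/lib/db.js -- a sketch of the described pattern, not the article's exact code
const { Pool } = require("pg");

class Database {
  constructor() {
    // DATABASE_URL follows postgres://[username]:[password]@[host]:[port]/[db_name]
    this.pool = new Pool({ connectionString: process.env.DATABASE_URL });
  }

  // Check out a client from the pool, run the query, then release the client
  async query(text, params) {
    const client = await this.pool.connect();
    try {
      return await client.query(text, params);
    } finally {
      client.release(); // return the client to the pool for the next request
    }
  }
}

module.exports = new Database();
```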
Additionally, there's a getAllTables method for retrieving low-level information about the tables available in our PostgreSQL database. In this case, our database only contains a single table: cp_squirrels. (api/lib/db.js)

The table cp_squirrels is seeded with records from the 2018 Central Park Squirrel Census dataset downloaded from the NYC Open Data portal. The dataset, downloaded as a CSV file, contains the fields obs_date (observation date) and lat_lng (coordinates of observation) with values that are not compatible with the PostgreSQL data types DATE and POINT respectively. Instead of directly copying the contents of the CSV file to the cp_squirrels table, we copy from the output of a GNU awk ("gawk") script. This script... (db/create.sql)

Upon the initialization of the PostgreSQL database container, this SQL file is run by adding it to the docker-entrypoint-initdb.d directory. (db/Dockerfile)

This server exposes a RESTful API with two endpoints: GET /tables and POST /api/records . The GET /tables endpoint simply calls the db.getAllTables method, and the POST /api/records endpoint retrieves data from the PostgreSQL database based on a query object sent within the incoming request. To bypass CORS restrictions for clients hosted on a different domain (or running on a different port on the same machine), all responses must have the Access-Control-Allow-Origin header set to the allowable domain ( process.env.CLIENT_APP_URL ) and the Access-Control-Allow-Headers header set to Origin, X-Requested-With, Content-Type, Accept . (api/index.js)

Notice that the Express.js server requires three environment variables: CLIENT_APP_URL , PORT and DATABASE_URL . These environment variables must be added to Heroku, which we will do later on in this post.

The Dockerfile for the Express.js server describes how to build the server's Docker image and automates the process of setting up and running the server. Since the server must run within a Node.js environment and relies on several third-party dependencies, the image must be built upon the node base image and install the project's dependencies before running the server via the npm start command. (api/Dockerfile)

However, because the filesystem of a Heroku dyno is ephemeral, volume mounting is not supported. Therefore, we must create a new file named Dockerfile-heroku that is dedicated only to the deployment of the application to Heroku and does not rely on a volume. (api/Dockerfile-heroku)

Unfortunately, you cannot deploy a multi-container Docker application via Docker Compose to Heroku. Therefore, we must deploy the Express.js server to a web dyno with Docker and separately provision a PostgreSQL database via the Heroku Postgres add-on. To deploy an application with Docker, you must either:

For this tutorial, we will deploy the Express.js server to Heroku by building a Docker image with heroku.yml and deploying this image to Heroku. Let's create a heroku.yml manifest file inside of the api subdirectory. Since the Express.js server will be deployed to a web dyno, we must specify the Docker image to build for the application's web process, which the web dyno belongs to: (api/heroku.yml)

Because our api/Dockerfile already has a CMD instruction, which specifies the command to run within the container, we don't need to add a run section. Let's add a setup section, which defines the environment's add-ons and configuration variables during the provisioning stage. Within this section, add the Heroku PostgreSQL add-on. Choose the free "Hobby Dev" plan and give it the unique name DATABASE . This unique name is optional; it is used to distinguish this add-on from other Heroku PostgreSQL add-ons. Fortunately, once the PostgreSQL database is provisioned, the DATABASE_URL environment variable, which contains the database connection information for this newly provisioned database, will be made available to our application.
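Putting those pieces together, a sketch of what this heroku.yml might look like, based on Heroku's documented manifest format:

```yaml
# api/heroku.yml -- a sketch of the manifest described above
setup:
  addons:
    - plan: heroku-postgresql:hobby-dev
      as: DATABASE   # optional unique name; DATABASE_URL is exposed once provisioned
build:
  docker:
    web: Dockerfile-heroku   # image to build for the application's web process
```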
Check if your machine already has the Heroku CLI installed. If not, install the Heroku CLI. For MacOSX, it can be installed via Homebrew: For other operating systems, follow the instructions here.

After installation, for the setup section of the heroku.yml manifest file to be recognized and used when creating a Heroku application, switch to the beta update channel and install the heroku-manifest plugin: Without this step, the PostgreSQL database add-on will not be provisioned from the heroku.yml manifest file; you would have to manually provision the database via the Heroku dashboard or the heroku addons:create command. Once installed, close out the terminal window and open a new one for the changes to take effect.

Note: To switch back to the stable update stream and uninstall this plugin:

Now, authenticate yourself by running the following command:

Note: If you want to remain within the terminal, as in entering your credentials directly within the terminal, then add the -i option after the command.

This command prompts you to press any key to open a login page within a web browser. Enter your credentials within the login form. Once authenticated, the Heroku CLI will automatically log you in.

Within the api subdirectory, create a Heroku application with the --manifest flag: This command automatically sets the stack of the application to container and sets the remote repository of the api subdirectory to heroku . When you visit the Heroku dashboard in a web browser, this newly created application is listed under your "Personal" applications:

Set the configuration variable CLIENT_APP_URL to a domain that should be allowed to send requests to the Express.js server. Note: The PORT environment variable is automatically exposed by the web dyno for the application to bind to. As previously mentioned, once the PostgreSQL database is provisioned, the DATABASE_URL environment variable will automatically be exposed. Under the application's "Settings" tab in the Heroku dashboard, you can find all configuration variables set for your application under the "Config Vars" section.

Create a .gitignore file within the api subdirectory. (api/.gitignore) Commit all the files within the api subdirectory: Push the application to the remote Heroku repository. The application will be built and deployed to the web dyno. Ensure that the application has successfully deployed by checking the logs of this web dyno:

If you visit https://<application-name>.herokuapp.com/tables in your browser, then a successful response is returned and printed to the browser. If the PostgreSQL database was not provisioned, manually provision it using the following command: Then, restart the dynos so the DATABASE_URL environment variable is available to the Express.js server at runtime.

Deploy your own containerized applications to Heroku!
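For orientation, the full CLI workflow described above looks roughly like this; the app name and client URL are placeholders, and exact flags may vary by CLI version:

```bash
# A sketch of the deployment workflow described above (app name is a placeholder)
heroku update beta                        # switch to the beta update channel
heroku plugins:install @heroku-cli/plugin-manifest
heroku login                              # authenticate (add -i to stay in the terminal)

cd api
heroku create your-app-name --manifest    # creates the app; stack is set to "container"
heroku config:set CLIENT_APP_URL=https://your-client.example.com

git add . && git commit -m "Deploy API to Heroku"
git push heroku master                    # build and deploy to the web dyno
heroku logs --tail                        # confirm the deploy succeeded

# If the database add-on was not provisioned from heroku.yml:
heroku addons:create heroku-postgresql:hobby-dev --as DATABASE
heroku restart                            # make DATABASE_URL visible at runtime
```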


React Query Builder - The Ultimate Querying Interface

For businesses looking to optimize their operations, data influences the decisions being made. For scientists looking to validate their hypotheses, data influences the conclusions being arrived at. Either way, the sheer amount of data collected and harnessed from various sources presents the challenge of identifying rising trends and interesting patterns hidden within this data. If the data is stored within an SQL database, such as PostgreSQL, querying it with the expressive power of the SQL language unlocks the data's underlying value. Creating interfaces that fully leverage the constructs of SQL in analytics dashboards can be difficult if done from scratch.

With a library like React Query Builder, which contains a query builder component for fetching and exploring rows of data with the exact same query and filter rules provided by the SQL language, we can develop flexible, customizable interfaces for users to easily access data from their databases. Although there are open source, administrative tools like pgAdmin, these tools cannot be integrated directly into a custom analytics dashboard (unless embedded within an iframe). Additionally, you would need to manage more user credentials and permissions, and these tools may be considered too overwhelming or technical for users who aren't concerned with advanced features, such as a procedural language debugger, or intricate back-end and database configurations.

By default, the <QueryBuilder /> component from the React Query Builder library contains a minimal set of controls only for querying data with pre-defined rules. Once the requested data is queried, this data can then be summarized by rendering it within a data visualization, such as a table or a line graph. Below, I'm going to show you how to integrate the React Query Builder library into your application to gain insights into your data.

To get started, scaffold a basic React project with the Create React App and TypeScript boilerplate template. Inside of this project's root directory, install the react-querybuilder dependency:

If you happen to run into the following TypeScript error...

Could not find a declaration file for module 'react'. '<project-name>/node_modules/react/index.js' implicitly has an 'any' type.

...then add the "noImplicitAny": false configuration under compilerOptions inside of tsconfig.json to resolve it.

React Query Builder composes a query from the rules or groups of rules set within the query builder interface. This query, in JSON form, should be sent to a server-side application that's connected to a PostgreSQL database to properly format the query into a SQL statement and execute that statement to fetch records of data from the database. For this tutorial, we will send this query to an Express.js API running within a multi-container Docker application. This application also runs a PostgreSQL database and pgAdmin in separate containers. The API connects to the PostgreSQL database and defines a POST route for processing the query.

With Docker Compose, you can execute a single command to spin up all of these services at once on a single host machine! To run the entire back-end, you don't need to manually install PostgreSQL or pgAdmin on your machine; you only need Docker. Plus, if you decide to run other services, such as NGINX or Redis, then you can add them within the docker-compose.yml configuration file.
Clone the following repository:

Inside the root of this cloned project, add a .env.development file with the following environment variables:

To run the server-side application, execute the following command:

This command starts up the server-side application. When you re-build and restart the application with this same command, it will do so from scratch with the latest images. It's up to you if you want to leverage caching to expedite the build and start-up processes. Nevertheless, let's break down what this command does:

For each docker-compose command, we pass a set of environment variables via the --env-file option. This approach to setting environment variables allows these variables to be accessed within the docker-compose.yml file and easily works in a CI/CD pipeline. Since the .env.<environment> files are typically not pushed to the remote repository (i.e., ignored by Git), especially for public-facing projects, when deploying this project to a cloud platform, the environment variables set within the platform's dashboard function the same way as those set by the --env-file option.

The PostgreSQL database contains only one table, named cp_squirrels , which is seeded with 2018 Central Park Squirrel Census data downloaded from the NYC Open Data portal. Each record represents a sighting of an eastern gray squirrel in New York City's Central Park in the year 2018.

Let's verify that pgAdmin is running by visiting localhost:5050 in the browser. Here, you will be presented with a log-in page. Enter your credentials ( NYCSC_PGADMIN_EMAIL and NYCSC_PGADMIN_PASSWORD ) into the log-in form.

On the pgAdmin welcome page, right-click on "Servers" in the "Browser" tree control (in the left pane) and in the dropdown, click Create > Server. Under "General," set the server name to nyc_squirrels . Under "Connection," set the host name to nycsc-pg-db , the container name of our PostgreSQL database container; it is where our PostgreSQL database is virtually hosted on our local machine. Set the username and password to the values of NYCSC_PGADMIN_EMAIL and NYCSC_PGADMIN_PASSWORD respectively. Save those server configurations.

Wait for pgAdmin to connect to the PostgreSQL database. Once connected, it should appear under the "Browser" tree control. Right-click on the database ( nyc_squirrels ) in the "Browser" tree control and in the dropdown, click the Query Tool option. Inside of the query editor, type a simple SQL statement to verify that the database has been properly seeded: This statement should return the first ten records of the cp_squirrels table.

Let's verify that the Express.js API is running by visiting localhost:<NYCSC_API_PORT>/tables in the browser. The browser should display low-level information about the tables available in our PostgreSQL database. In this case, our database only contains a single table: cp_squirrels.

Great! With the server-side working as intended, let's turn our attention back to integrating the React Query Builder component into the client-side application. Inside of our Create React App project's src/App.tsx file, import the <QueryBuilder /> component from the React Query Builder library. At a minimum, this component accepts two props:

This is what the query builder looks like without any styling and with only these two props passed to the <QueryBuilder /> component: This probably doesn't make much sense yet, so let's immediately jump into a basic example to better understand the capabilities of this component. Let's make the following adjustments to the src/App.tsx file to create a very basic query builder:
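The exact code isn't reproduced in this excerpt; a minimal sketch might look like the following, with field names assumed from the cp_squirrels columns:

```tsx
// src/App.tsx -- a minimal sketch; field names are assumptions based on the
// cp_squirrels columns (the article defines eight fields in total)
import { QueryBuilder, RuleGroupType } from "react-querybuilder";

const fields = [
  { name: "x", label: "X" }, // longitude
  { name: "y", label: "Y" }, // latitude
  { name: "primary_fur_color", label: "Primary Fur Color" },
  // ...the remaining cp_squirrels columns
];

// Invoked anytime an action on the query builder changes the query
const logQuery = (query: RuleGroupType) => {
  console.log(query);
};

export default function App() {
  return <QueryBuilder fields={fields} onQueryChange={logQuery} />;
}
```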
Open the application within your browser. The following three-element component is shown in the browser:

The first element is the combinator selector, a <select /> element that contains two options: AND and OR. These options correspond to the AND and OR operators of a SQL statement's WHERE clause.

The second element is the add rule action, a <button /> element ( +Rule ) that adds a rule when pressed. If you press this button, then a new rule is rendered beneath the initial query builder component: A rule consists of a field, an operator and a value editor, and it corresponds to a condition specified in a SQL statement's WHERE clause. The field <select /> element lists all of the fields passed into the fields prop. Notice that the label of the field is shown in this element. The operator <select /> element lists all of the possible comparison/logical operators that can be used in a condition. Lastly, the value editor <input /> element contains what the field will be compared to. For example, if we type -73.9561344937861 into the <input /> field, then the condition that will be specified in the WHERE clause is X = -73.9561344937861 . Basically, this will fetch all squirrel sightings located at the longitudinal value of -73.9561344937861 .

With only one rule, the combinator selector is not applicable. However, if we press the add rule action button again, another rule will be rendered, and the combinator selector will become applicable. With two rules, two conditions are specified and combined with the AND operator: X = -73.9561344937861 AND Y = 40.7940823884086 .

The third element is the add group action, a <button /> element ( +Group ) that adds an empty group of rules when pressed. If you press this button, then a new group is rendered beneath whatever has already been rendered in the query builder component: Currently, there are no rules within the newly created group. When we add two new rules to this group by pressing its add rule action button twice and change the value of its combinator selector to OR, like so: The two rules within this new group are combined together, similar to placing parentheses around certain conditions in a WHERE clause to give them a higher priority during evaluation. For the above case, the overall condition specified in the WHERE clause would be X = -73.9561344937861 AND Y = 40.7940823884086 AND (X = -73.9688574691102 OR Y = 40.7837825208444) .

A total of eight fields are defined, based on the columns of the cp_squirrels table. For each field, the name property corresponds to the actual column name, and the label property corresponds to a more presentable column title that is shown in the field <select /> element of each rule.

If you look in the developer tools console, you will see many query objects logged to the console: every action performed on the query builder that changes the query invokes the logQuery function, which prints the query to the console. If we import the formatQuery function from the react-querybuilder library and call it inside of logQuery with the query, then we can format the query in many different ways. For now, let's format the query to a SQL WHERE clause: (src/App.tsx)
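A sketch of that change, building on the logQuery handler from the earlier sketch:

```tsx
// Formatting the query as the condition of a SQL WHERE clause -- a sketch
import { formatQuery, RuleGroupType } from "react-querybuilder";

const logQuery = (query: RuleGroupType) => {
  console.log(query); // the raw query object
  // e.g. (x = '-73.9561344937861' and y = '40.7940823884086')
  console.log(formatQuery(query, "sql"));
};
```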
If we modify any of the controls' values, then both the query (in its raw object form) and its formatted string (as a condition of a WHERE clause) are printed to the console:

With the fundamentals out of the way, let's focus on sending the query to our Express.js API to fetch data from our PostgreSQL database. Inside of src/App.tsx, let's add a "Send Query" button below the <QueryBuilder /> component:

Note: The underscore prefix of the _evt argument indicates an unused argument.

When the user clicks this button, the client will send the most recent query to the /api/records endpoint of the Express.js API. This endpoint takes the query, formats it into a SQL statement, executes this SQL statement and responds with the result table. We will need to store the query inside a state variable to allow other functions within the <App /> component, such as sendQuery , to access the query. This changes our uncontrolled component to a controlled component. (src/App.tsx)

Anytime onQueryChange is invoked, the setUpdateQuery method will update the value of the updateQuery variable, which must adhere to the type RuleGroupType . Update the sendQuery function to send updateQuery to the /api/records endpoint and log the data in the response. (src/App.tsx)

Inside of the query builder, if we want to retrieve squirrel sightings found at the coordinates (40.7940823884086, -73.9561344937861), then we create two rules: one for X (longitude) and one for Y (latitude). When we press the "Send Query" button, the result table (in JSON) is printed to the console: Only one squirrel sighting was observed at that particular set of coordinates.

Let's display the result in a simple table: (src/App.tsx) Press the "Send Query" button again. The result table (with only one record) should be displayed within a table. The best part is you can add other visualization components to display your fetched data. The sky's the limit!

Click here for the final version of this project. Visit the React Query Builder documentation to learn more about how you can customize it to your application's needs.
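For reference, here's a hedged sketch pulling together the controlled query state and sendQuery handler described above; the request and response shapes are assumptions, not the article's exact code:

```tsx
// A sketch of the controlled query builder and "Send Query" handler described above
import { useState } from "react";
import { QueryBuilder, RuleGroupType } from "react-querybuilder";
import { fields } from "./fields"; // hypothetical module holding the fields from the earlier sketch

export default function App() {
  const [updateQuery, setUpdateQuery] = useState<RuleGroupType>({
    combinator: "and",
    rules: [],
  });
  const [records, setRecords] = useState<any[]>([]);

  const sendQuery = async (_evt: unknown) => {
    // POST the latest query; the API formats it into SQL and executes it
    const res = await fetch("/api/records", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ query: updateQuery }), // body shape assumed
    });
    const data = await res.json();
    console.log(data); // the result table, in JSON
    setRecords(data.rows ?? data); // response shape assumed
  };

  return (
    <>
      <QueryBuilder
        fields={fields}
        query={updateQuery}
        onQueryChange={setUpdateQuery}
      />
      <button onClick={sendQuery}>Send Query</button>
      {/* render `records` in a table or other visualization here */}
    </>
  );
}
```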


Writing Retrowave in Angular

The Web Audio API has been around for a while now, and there are lots of great articles about it, so I will not go into details regarding the API itself. What I will tell you is that Web Audio can be Angular's best friend if you introduce it well. So let's do this.

In the Web Audio API, you create a graph of audio nodes that process the sound passing through them. They can change volume, introduce delay or distort the signal. Browsers have special AudioNodes with various parameters to handle this. Initially, one would create them with factory functions of AudioContext : But they have since become proper constructors, which means you can extend them. This allows us to use the Web Audio API elegantly and declaratively in Angular, because Angular directives are classes and they can extend existing native classes.

A typical feedback loop to create an echo effect with Web Audio looks like this: We can see that the vanilla code is purely imperative. We create objects, set parameters, and manually assemble the graph using the connect method. In the example above, we use the HTML audio tag; when the user presses play, they hear an echo on the audio file. We will replicate this case using directives.

AudioContext will be delivered through Dependency Injection. Both GainNode and DelayNode have only one parameter each: gain and delay time. That is not just a number, it is an AudioParam . We will see what that means a bit later.

To declaratively link our nodes into a graph, we will add an AUDIO_NODE token, which all our directives will provide. Each directive takes the closest node from DI and connects to it. We've also added exportAs , which allows us to grab a node with template reference variables . Now we can build the graph in a template: We will end a branch and direct sound to the speakers with waAudioDestinationNode :

To create loops like the echo example above, Dependency Injection alone is not enough. We will make a special directive that allows us to pass a node as an input to connect to it:

Both those directives extend GainNode , which creates an extra node in the graph. It allows us to disconnect it in ngOnDestroy easily: we do not need to remember everything that is connected to our directive, we can just disconnect this from everything at once.
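The directive code itself isn't reproduced in this excerpt. A minimal sketch of the pattern described above might look like the following; the token names, selector, and connection direction are assumptions based on the prose:

```typescript
// A sketch of a gain directive following the pattern described above.
// AUDIO_CONTEXT/AUDIO_NODE tokens and the selector name are assumptions for illustration.
import {
  Directive, Inject, InjectionToken, OnDestroy, Optional, SkipSelf, forwardRef,
} from '@angular/core';

export const AUDIO_CONTEXT = new InjectionToken<AudioContext>('AudioContext');
export const AUDIO_NODE = new InjectionToken<AudioNode>('AudioNode');

@Directive({
  selector: '[waGainNode]',
  exportAs: 'AudioNode', // lets templates grab the node: #gain="AudioNode"
  providers: [{ provide: AUDIO_NODE, useExisting: forwardRef(() => GainNodeDirective) }],
})
export class GainNodeDirective extends GainNode implements OnDestroy {
  constructor(
    @Inject(AUDIO_CONTEXT) context: AudioContext,
    // The closest node above us in the template; sound flows from it into this node
    @Optional() @SkipSelf() @Inject(AUDIO_NODE) node: AudioNode | null,
  ) {
    super(context); // GainNode is a proper constructor, so we can extend it
    node?.connect(this);
  }

  ngOnDestroy() {
    this.disconnect(); // detach from everything at once
  }
}
```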
The last directive we need to complete our example is a bit different. It's a source node, and it's always at the top of our graph. We will put a directive on the audio tag, and it will turn the tag into a MediaElementAudioSourceNode for us: Now let's create the echo example with our directives:

There are lots of different nodes in the Web Audio API, but all of them can be implemented using a similar approach. Two other important source nodes are OscillatorNode and AudioBufferSourceNode . Often we do not want to add anything to the DOM, and there is no need to provide audio file controls to the user. In that case, AudioBufferSourceNode is a better option than the audio tag. The only inconvenience is that it works with an AudioBuffer , unlike the audio tag, which takes a link to an audio asset. We can create a service to mitigate that: Now we can create a directive that works both with an AudioBuffer and an audio asset URL:

Audio nodes have a special kind of property: AudioParam , for example gain in GainNode . That's why we used a setter for it. Such a property's value can be automated. You can set it to change linearly, exponentially or even over an array of values in a given time. We need some sort of handler that takes care of this for all such inputs of our directives. A decorator is a good option for this case: The decorator passes processing to a dedicated function: Strong types will not allow us to accidentally use it for a non-existent parameter.

So what would the AudioParamInput type look like? Besides number , it includes an automation object: The processAudioParam function translates those objects into native API commands. It's pretty boring, so I will just describe the principle. If the current value is 0 and we want it to change linearly to 1 in a second, we would pass {value: 1, duration: 1, mode: 'linear'} . For complex automation, we also need to support an array of such objects.

We would typically pass an automation object with a short duration instead of a plain number , because it prevents audible clicking artifacts when a parameter changes abruptly. But it's not convenient to do this manually all the time, so let's create a pipe that takes the target value, duration and optional mode as arguments:

Besides that, an AudioParam can be automated by connecting an oscillator to it. Usually a frequency lower than 1 is used, and it is called an LFO: a Low Frequency Oscillator. It can create movement in sound. In the example below, it adds texture to otherwise static chords by modulating the frequency of a filter they pass through. To connect an oscillator to a parameter, we can use our waOutput directive. We can access the node thanks to exportAs :

The Web Audio API can be used for different things, from real-time processing of a voice for a podcast to math computations, Fourier transforms and more. Let's compose a short music piece using our directives:

We will start with a simple task: a straight drum beat. To count beats, we will create a stream and add it to DI: We have 4 beats per measure, so let's map our stream: Now it gives us true at the beginning and false in the middle of each bar. We will use it to play audio samples:

Now let's add a melody. We will use numbers to indicate notes, where 69 is the middle A note. The function that translates this number to a frequency can easily be found on Wikipedia. Here's our tune:

Our component will play the right frequency for each note on each beat: And inside its template we will have a real synthesizer! But first we need another pipe, which automates volume with an ADSR envelope. That stands for "Attack, Decay, Sustain, Release", and here's how it looks: In our case, we need the sound to start quickly and then fade away. The pipe is rather simple: Now we will use it for our synth tune:

Let's figure out what's going on here. We have two oscillators. The first is just a sine wave passed through the ADSR pipe. The second is the same echo loop we've seen, except this time it passes through a ConvolverNode , which creates room acoustics using an impulse response. That is a big and interesting subject of its own, but it is outside this article's scope. All other tracks in our song are made similarly: nodes are connected to each other, and parameters are automated with LFOs or changed smoothly via our pipes.

I only went over a small portion of this subject, simplifying corner cases. We've made a complete conversion of the Web Audio API into a declarative Angular open-source library: @ng-web-apis/audio . It covers all the nodes and features. This library is part of a bigger project called Web APIs for Angular , an initiative with the goal of creating lightweight, high-quality wrappers of native APIs for idiomatic use with Angular. So if you want to try, say, the Payment Request API or play with your MIDI keyboard in the browser, you are very welcome to browse all our releases so far .
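As a reference for the note numbers mentioned above, the standard equal-temperament conversion (MIDI note 69 = A at 440 Hz) is:

```typescript
// Standard MIDI-note-to-frequency conversion (equal temperament, A4 = 440 Hz)
function noteToFrequency(note: number): number {
  return 440 * Math.pow(2, (note - 69) / 12);
}

noteToFrequency(69); // 440 Hz (A4)
noteToFrequency(60); // ~261.63 Hz (middle C)
```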
