Tutorials on TypeScript

Learn about TypeScript from fellow newline community members!

  • React
  • Angular
  • Vue
  • Svelte
  • NextJS
  • Redux
  • Apollo
  • Storybook
  • D3
  • Testing Library
  • JavaScript
  • TypeScript
  • Node.js
  • Deno
  • Rust
  • Python
  • GraphQL

Building a Choropleth Map with D3 and Svelte

In this article, we will create a data visualization that displays the ratio of Dunkin' Donuts locations to Starbucks locations (by state) using D3 and Svelte. Which of America's largest coffee chains keeps your state awake and ready for the 9-5 workday?

Choropleth maps bring data to life. By projecting data onto a map, you can craft a captivating visual narrative around your data that uncovers geographical patterns and insights. Choropleth maps color (or shade) geographical areas like countries and states based on numeric data values. The intensity of the color represents the magnitude of the data value in a specific geographical area. With a single glance, these colors let us identify regional hotspots, trends and disparities that are not immediately apparent from raw data.

Think about the geographic distribution of registered Democrat and Republican voters across the United States. A state with an overwhelming majority of registered Democrat voters might be colored blue, whereas a state with an overwhelming majority of registered Republican voters might be colored red. A state with a single-digit percentage difference between registered Democrat and Republican voters, such as Pennsylvania, would be colored purple. By contrast, a state with a significantly larger ratio of registered Democrat voters to Republican voters, such as California, would be colored a more intense blue. Examining all of the states, you will recognize that registered Democrat voters primarily reside in the northeast region and along the western coastline of the United States.

Choropleth maps let us answer geographic questions about our data and contextualize our data through the lens of our knowledge of the world. For example, a choropleth map of registered Democrat and Republican voters in the United States, on a state-by-state basis, may make evident the differences in the laws and policies enacted across each state. Anyone who can read a map will have no trouble navigating, understanding and deriving conclusions from choropleth maps.

A common method for creating choropleth maps for the web is D3, a popular JavaScript data visualization library. However, using just D3 to create choropleth maps comes with several downsides, most of which stem from its imperative approach to manipulating the DOM. So why not delegate the rendering logic to a declarative UI framework like Svelte? Svelte surgically updates the DOM and produces highly optimized JavaScript code with zero runtime overhead. Additionally, Svelte components consist of three sections (script, styles and markup) to keep logic organized and consistent. By letting Svelte handle the rendering logic and D3 handle the data transformation logic (and difficult mathematical calculations), we get the best of both libraries.

Below, I'm going to show you how to build a choropleth map with D3 and Svelte. The choropleth map will display the ratio of Dunkin' Donuts locations to Starbucks locations (by state). States with significantly more Starbucks locations than Dunkin' Donuts locations will be colored green, and states with significantly more Dunkin' Donuts locations than Starbucks locations will be colored orange. A legend will be added to map colors to magnitudes of location ratios. By the end of this tutorial, you will have built the following choropleth map:

To set up a new Svelte project with Vite and TypeScript, run the command npm init vite. Note: You may generate a new Svelte application with SvelteKit, but this tutorial is only focused on building out a single Svelte component for the choropleth map.
Therefore, it's preferable to use a lighter template so that you don't have to deal with extra project files.

To obtain the number of Dunkin' Donuts and Starbucks locations in each state, visit the chains' location-count websites and record the states and their location counts in two CSV files: dunkin_donuts_locations_counts.csv and starbucks_locations_counts.csv. Each CSV's header row includes titles for two columns, state and count, and the delimiter should be a comma. Then, within the public directory, create a data directory and place both CSV datasets in this new directory.

To obtain a TopoJSON file of the geometries that represent US states, visit the U.S. Atlas TopoJSON GitHub repository ( https://github.com/topojson/us-atlas ). Then, scroll through the contents of the repository's README.md file and download the states-albers-10m.json file. The state boundaries are drawn based on the 2017 edition of the Census Bureau's cartographic state boundaries. Unlike the states-10m.json file, the geometries within this file have been projected to fit a 975 x 610 viewport. Once downloaded, rename the file as us_topojson.json and place it within the public/data directory.

To create geographic features in an SVG canvas, D3 consumes GeoJSON data. So why are we downloading a TopoJSON file? TopoJSON is an extension of GeoJSON that eliminates redundancy in geometries via arcs. TopoJSON files are more compact than their GeoJSON equivalents (typically about 80% smaller), and they preserve and encode topology. For the choropleth map, we download a TopoJSON file, not a GeoJSON file, of US states so that the map's data loads quickly. Then, we will leverage a module, topojson-client, to convert TopoJSON features to GeoJSON features for D3 to work with.

For the choropleth map, we will need to install four specific D3 modules and a related module that's also from the creator of D3. Run the following command to install these modules and their type definitions in the Svelte project.

First, delete the src/lib directory and src/app.css file. Then, in src/main.ts, omit the import './app.css' statement at the top of the file. In the src/App.svelte file, clear out the contents of the script, style and markup sections. Within the script section, let's add the import statement for the <ChoroplethMap /> component and declare two variables, datasets and colors (a hedged sketch of what they might look like follows at the end of this step). ( src/App.svelte ) Within the style section, let's add some minor styles to horizontally center the <ChoroplethMap /> component in the <main /> element. ( src/App.svelte ) Note: Styles defined in the <App /> component won't leak into other Svelte components. Within the <main /> element of the markup section, render the <ChoroplethMap /> component. Also, pass datasets to the datasets prop and colors to the colors prop of the <ChoroplethMap /> component, like so: ( src/App.svelte )

Within the src directory, create a new folder named components. This folder will contain any reusable components used in this Svelte application. In this case, there will only be one component in this directory: ChoroplethMap.svelte. Create this file inside of the src/components directory.
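Here is a minimal sketch of the two variables passed from <App /> to <ChoroplethMap />. The exact object shape, file paths and hex colors are assumptions for illustration, not the article's verbatim code.

```ts
// src/App.svelte (script section) -- hypothetical shape of the props.
import ChoroplethMap from './components/ChoroplethMap.svelte';

const datasets = [
  { label: "Dunkin' Donuts", url: '/data/dunkin_donuts_locations_counts.csv' },
  { label: 'Starbucks', url: '/data/starbucks_locations_counts.csv' },
];

// First color (orange) marks Dunkin'-heavy states; last color (green) marks Starbucks-heavy states.
const colors = ['#ff8732', '#f5f5f5', '#00704a'];
```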
Within the src/components/ChoroplethMap.svelte file, begin with an empty script section for the <ChoroplethMap /> component: ( src/components/ChoroplethMap.svelte ) At the top of the script section, import several methods from the installed D3 modules: ( src/components/ChoroplethMap.svelte ) Then, declare the datasets and colors props that the <ChoroplethMap /> component currently accepts. Set their default values to empty arrays when no value is passed to either prop. ( src/components/ChoroplethMap.svelte )

d3-fetch comes with a convenient method for fetching and parsing CSV files: csv(). This method accepts, as arguments, a URL to a CSV dataset and a callback function that maps each row's values to actual data values. For example, since numeric values in a CSV file are initially represented as strings, they must be parsed as numbers. In our case, we want to parse count as a number. In a Svelte component, we will need to use the onMount lifecycle method to fetch data after the component gets rendered to the DOM for the first time. To load both datasets, we can fetch them in parallel and flatten the results into a single array (a hedged sketch of this step appears at the end of this section). Note: We're flattening the returned data so that we can later group the data by state and calculate the location ratio on a per-state basis.

d3-fetch also comes with a convenient method for fetching and parsing JSON files: json(). For the choropleth map, we only need to pass this method a URL to the TopoJSON file with the geometry collection for US states. We will need to add this line of code to the onMount lifecycle method so that the TopoJSON data gets fetched alongside the CSV data. To convert the TopoJSON data to GeoJSON data, we will use the topojson-client module; add these lines of code to the onMount lifecycle method as well.

Like with any D3 data visualization, you need to define its dimensions. Let's define the choropleth map's width, height and margins, like so: In the <ChoroplethMap /> component's markup section, add an <svg /> element and set its width, height and viewBox using the values from dimensions. Within this <svg /> element, add a <g /> element that will group the <path /> elements that will represent the states and the internal borders between them.

Back in the script section of the <ChoroplethMap /> component, create a new geographic path generator via the geoPath() method. path is a function that turns GeoJSON data into a string that defines the path to be drawn for a <path /> element. In other words, this function, when called with stateMesh or a feature object from statesFeatures, will return a string that we can set to the d attribute of a <path /> element to render the internal borders between states or a state, respectively. Here, we'll render the internal borders between states and use an each block to loop over the feature objects in statesFeatures and render the states inside of the <g /> element. Since stateMesh and statesFeatures are declared within the onMount lifecycle method, we'll have to move the declarations to the top level of the script section to ensure that these values can be used in the markup section of the <ChoroplethMap /> component.

When you run the project in development via npm run dev, you should see a choropleth map that looks like the following: To adjust the fill color of each state by location ratio, first locally declare two variables at the top level of the script section. Note: <string, string> corresponds to <Range, Output>. The Range generic represents the type of the range values, and the Output generic represents the type of the output values (what's returned when calling scale()).
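A minimal sketch of the data-loading logic described above, written as the script section of ChoroplethMap.svelte. Variable names follow the article, but the file paths, prop shape and loose typing are assumptions.

```ts
// src/components/ChoroplethMap.svelte (script section) -- hedged sketch, not the article's exact code.
import { onMount } from 'svelte';
import { csv, json } from 'd3-fetch';
import { geoPath } from 'd3-geo';
import { feature, mesh } from 'topojson-client';

export let datasets: { label: string; url: string }[] = [];
export let colors: string[] = [];

let locations: { label: string; state: string; count: number }[] = [];
let statesFeatures: any[] = [];
let stateMesh: any;

// The states-albers-10m.json geometries are pre-projected, so no projection is needed.
const path = geoPath();

onMount(async () => {
  // Fetch both CSV datasets in parallel and flatten the rows into one array.
  const rows = await Promise.all(
    datasets.map(({ label, url }) =>
      csv(url, (d) => ({ label, state: d.state ?? '', count: +(d.count ?? 0) }))
    )
  );
  locations = rows.flat();

  // Fetch the TopoJSON file and convert it to GeoJSON features for D3.
  const us: any = await json('/data/us_topojson.json');
  statesFeatures = (feature(us, us.objects.states) as any).features;
  stateMesh = mesh(us, us.objects.states, (a, b) => a !== b); // internal borders only
});
```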
Within the onMount lifecycle method, use d3-array's rollup() method to group the data by state name and map each state name to the ratio of Dunkin' Donuts locations to Starbucks locations in that state. Then, get the maximum location ratio from ratios via D3's extent() method. Since the method only accepts an array as an argument, you will need to first convert the map to an array via Array.from(). Then, set scale to a linear scale that maps the ratios to the colors passed into the colors prop. The max value corresponds to the first color in the colors list, an orange color. Any state that's colored orange indicates a higher ratio of Dunkin' Donuts locations to Starbucks locations, and any state that's colored green indicates a lower ratio of Dunkin' Donuts locations to Starbucks locations. A 1:1 ratio (1 in the domain) denotes an equal number of Dunkin' Donuts and Starbucks locations. Note: A quantized scale might seem better suited here. However, the domain of scaleQuantize() accepts only two values, a minimum and a maximum, which means you cannot define your own threshold values (scaleQuantize() automatically creates its own threshold values from the provided minimum and maximum values).

Within the markup section of the <ChoroplethMap /> component, replace the currently set fill of "green" with scale(ratios.get(feature.properties.name)). Upon saving these changes, you should see the colors of the states update. Wow, it seems Dunkin' Donuts keeps the northeast of the US awake!

The colors chosen for this data visualization are based on the official branding colors of Dunkin' Donuts and Starbucks. For folks who might not be familiar with Dunkin' Donuts' and Starbucks' official branding colors, let's create a simple legend for the choropleth map so they know which states have a higher concentration of Starbucks locations and which states have a higher concentration of Dunkin' Donuts locations. First, let's locally declare a variable categories that maps datasets to an array that contains only the labels of the datasets. Then, create a new file in the src/components directory: Legend.svelte. This <Legend /> component will accept three props: dimensions, colors and categories. Given that we only want two labels for the legend, one for the first color in colors and one for the last color in colors, we create the labels by setting the first item in labels to categories[0] ("Dunkin' Donuts") and the last item in labels to categories[1] ("Starbucks"). Then, we leave the middle three labels undefined. This way, we can render the colors and labels one-to-one in the markup section. ( src/components/Legend.svelte )

Back in the <ChoroplethMap /> component, we can import the <Legend /> component and render it within the <svg /> element like so: Upon saving these changes, you should see the legend appear in the bottom-right corner of the choropleth map.

Try customizing the choropleth map with your own location counts data. If you find yourself stuck at any point while working through this tutorial, then feel free to check out the project's GitHub repository or a live demo of this project in the following CodeSandbox: If you want to learn more about building visualizations with D3 and Svelte, then check out the Better Data Visualizations with Svelte course by Connor Rothschild, a partner and data visualization engineer at Moksha Data Studio.


Building a Bar Chart Race with D3 and Svelte

In this article, we will create a data visualization that animates the changes in the stargazer counts of popular front-end library/framework GitHub repositories over the past 15 years. Which front-end libraries/frameworks currently dominate the web development landscape? Which front-end libraries/frameworks used to dominate the web development landscape?

Bar chart races make boring bar charts dynamic and fun. Unlike regular bar charts, bar chart races show the growth and decline (the fluctuations) in the relative values of categories over time. Each bar represents a category, and the bar grows or shrinks in length with respect to its corresponding value at a given time and an ever-changing scale. The bars reposition themselves, commonly in descending order of values. Depending on the maximum number of bars that can be shown in the bar chart race, you may occasionally see a bar drop off at, or re-emerge from, the bottom of the visualization. Due to the animation aspect of bar chart races (the racing effect created by animated bars), they have become popular in recent years on social media platforms. They turn vast amounts of complex data into a captivating, easy-to-digest medium. Bar chart races reveal trends that emerged or fell off across intervals of time. For example, if you created a bar chart race of browser usage from the 1990s to the present day, then you might initially see the rise of Internet Explorer, followed by its gradual decline as browsers like Chrome and Firefox became dominant forces in the browser market.

D3 is great at tracking elements in an animation and animating enter and exit transitions. However, its imperative .join() approach to data binding and managing enter, update and exit animations separately is not as expressive as Svelte's declarative approach via reactivity, dynamic attributes (via curly braces) and built-in animation and transition directives.

Below, I'm going to show you how to build a bar chart race with D3 and Svelte. The bar chart race will show the rate of growth in each GitHub repository's stargazer count from April 2009 to the present day. By the end of this tutorial, you will have built the following bar chart race:

To set up a new Svelte project with Vite and TypeScript, run the command npm init vite. Note: You may generate a new Svelte application with SvelteKit, but this tutorial is only focused on building out a single Svelte component for the bar chart race. Therefore, it's preferable to use a lighter template so that you don't have to deal with extra project files.

Currently, you cannot query GitHub's GraphQL API for a GitHub repository's stargazer count history. However, there's an open source project that maintains records of repositories' stargazer counts through the years: Star History. To get a CSV of historical stargazer counts for a GitHub repository, enter both the username of the GitHub repository's author and the name of the GitHub repository, delimited by a / (for example, facebook/react for React.js). Once you've clicked on the "View star history" button and waited for the chart to be generated, click on the CSV button to download this data into a CSV file. You can add more GitHub repositories to the chart so that the CSV will contain data for all of these GitHub repositories. For the bar chart race, we will be visualizing the historical stargazer counts for several popular front-end library/framework repositories. Once downloaded, rename the file as frontend-libraries-frameworks.csv and place it within the public/data directory.
Since the data is incomplete, we will be interpolating stargazer counts for unknown dates. Additionally, omit the day of the week, the time and the time zone from the dates in the second column (e.g., Thu Feb 11 2016 12:06:18 GMT-0500 (Eastern Standard Time) becomes Feb 11 2016). At the top of the CSV, add a header row to label the columns: "name,date,value."

For the bar chart race, we will need to install five specific D3 modules. Run the following command to install these D3 modules and their type definitions in the Svelte project.

First, delete the src/lib directory and src/app.css file. Then, in src/main.ts, omit the import './app.css' statement at the top of the file. In the src/App.svelte file, clear out the contents of the script, style and markup sections. Within the script section, let's add the import statement for the <BarChartRace /> component and two variables: ( src/App.svelte ) Within the style section, let's add some minor styles to horizontally center the <BarChartRace /> component in the <main /> element. ( src/App.svelte ) Note: Styles defined in the <App /> component won't leak into other Svelte components. Within the <main /> element of the markup section, render the <BarChartRace /> component. Also, pass datasetUrl to the datasetUrl prop and maxBars to the maxBars prop of the <BarChartRace /> component, like so: ( src/App.svelte )

Then, create a types folder under the src directory. Within this folder, create an index.ts file and define and export two interfaces: Record and KeyframeRecord. ( types/index.ts ) We will annotate the records from the raw CSV dataset with Record, and we will annotate the records stored in a "keyframe" (we will cover this later in this tutorial) with KeyframeRecord.

Within the src directory, create a new folder named components. This folder will contain any reusable components used in this Svelte application. In this case, there will only be one component in this directory: BarChartRace.svelte. Create this file inside of the src/components directory. Within the src/components/BarChartRace.svelte file, begin with an empty script section for the <BarChartRace /> component: ( src/components/BarChartRace.svelte ) At the top of the script section, import several methods from the installed D3 modules: ( src/components/BarChartRace.svelte ) Then, declare the datasetUrl and maxBars props that the <BarChartRace /> component currently accepts. Additionally, locally declare three variables: ( src/components/BarChartRace.svelte )

d3-fetch comes with a convenient method for fetching and parsing CSV files: csv(). This method accepts, as arguments, a URL to a CSV dataset and a callback function that maps each row's values to actual data values. All values in the CSV dataset are represented as strings. For the bar chart race, we need to parse value as a number and date as a Date object. To parse date as a Date object, create a parser by calling the timeParse() method with the structure of the stringified date (so that the parser understands how to parse the date string). Since date is formatted as <abbreviated month name> <zero-padded day of the month> <year with century> (e.g., Feb 11 2016), we pass the specifier string "%b %d %Y" to the timeParse() method. In a Svelte component, we will need to use the onMount lifecycle method to fetch data after the component gets rendered to the DOM for the first time, like so (a hedged sketch of this step follows below):
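A minimal sketch of the CSV parsing step, assuming the props described above. The interface name here (StarRecord) stands in for the article's Record type, and the default maxBars value is an assumption.

```ts
// src/components/BarChartRace.svelte (script section) -- hedged sketch, not the article's exact code.
import { onMount } from 'svelte';
import { csv } from 'd3-fetch';
import { timeParse } from 'd3-time-format';

export let datasetUrl = '';
export let maxBars = 12; // assumed default

interface StarRecord {
  name: string;
  date: Date;
  value: number;
}

// "%b %d %Y" matches dates such as "Feb 11 2016".
const parseDate = timeParse('%b %d %Y');

let data: StarRecord[] = [];

onMount(async () => {
  data = (await csv(datasetUrl, (d) => ({
    name: d.name ?? '',
    date: parseDate(d.date ?? '') as Date,
    value: +(d.value ?? 0),
  }))) as StarRecord[];
});
```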
The bar chart race's animation iterates over a series of keyframes. Each keyframe represents exactly one moment of the bar chart race; it contains data on the GitHub repositories' stargazer counts at a given date. Because the source dataset from Star History doesn't contain stargazer counts for every single date from April 2009 (the month of the earliest known stargazer count) to the present day, we will need to interpolate between data points (estimate stargazer counts for unknown dates) to guarantee that there are enough keyframes to make the animation run smoothly. Every x milliseconds, we can update the animation with the data from the next keyframe until we run out of keyframes, at which point the animation stops at the current-day stargazer counts for the GitHub repositories. To create these keyframes, we need to group the data by date and interpolate values between consecutive dates (a hedged sketch of this step appears at the end of this section).

Like with any D3 data visualization, you need to define its dimensions. Let's define the bar chart race's width, height and margins, like so: In the <BarChartRace /> component's markup section, add an <svg /> element and set its width, height and viewBox using the values from dimensions. Within this <svg /> element, add a <g /> element that will group the <rect /> elements that will represent the bars.

The x-scale maps a domain of stargazer counts to the horizontal dimension of the bar chart race. You're probably wondering why the maximum value of the domain is 1 even though our source dataset shows that the maximum stargazer count is 210,325. This domain serves as a placeholder for the x-scale's domain. When we're animating the bar chart race by iterating over the keyframes, we will adjust the x-scale's domain based on the current keyframe's data. This way, during the animation, the maximum stargazer count will always span the entire width ( dimensions.width - dimensions.margin.right ) of the bar chart race. On the other hand, the y-scale maps a domain of visible bar indices to the vertical dimension of the bar chart race. The domain specifies one more than the maximum number of visible bars since we want to be able to transition smoothly between the bottom-most visible bar and the hidden bar beneath it. Note: <number> corresponds to <Range>. This generic represents the data type of the range values.

Then, define a color scheme. Initialize it as a function that returns "#FFFFFF." This function will serve as a placeholder until we actually fetch the CSV dataset, at which point we can reassign the color scheme to map each GitHub repository to a specific color. Note: _d ensures that the function signature matches the signature of the function that will later override this placeholder. In the onMount lifecycle method, after fetching the CSV dataset and creating a set of GitHub repository names from the source data, assign a new color scheme that maps each GitHub repository name to a specific color, like so:

To animate the bar chart race, first locally declare a variable keyframeItems at the top level of the script section. keyframeItems will hold a keyframe's list of the GitHub repositories and their stargazer counts and ranks. By reassigning this variable for each keyframe, Svelte's reactivity will automatically update the bars' widths and positions. Additionally, at the top level of the script section, call the timeFormat() method with a string that describes how to format the date based on an input Date object. This way, the formatter knows what to output when given an input Date (e.g., "Jul 2023").
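A rough sketch of the keyframe-building step described above (grouping by date and interpolating between consecutive dates). The frame count, type names and ranking logic are assumptions; the article's actual implementation may differ.

```ts
import { group } from 'd3-array';

interface StarRecord {
  name: string;
  date: Date;
  value: number;
}

type KeyframeItem = { name: string; value: number; rank: number };
type Keyframe = [Date, KeyframeItem[]];

const framesPerTransition = 10; // interpolated frames between two real data points
const maxBars = 12; // assumed maximum number of visible bars

function rank(valueByName: Map<string, number>): KeyframeItem[] {
  const items = Array.from(valueByName, ([name, value]) => ({ name, value, rank: 0 }));
  items.sort((a, b) => b.value - a.value);
  items.forEach((item, i) => (item.rank = Math.min(i, maxBars)));
  return items;
}

function createKeyframes(data: StarRecord[]): Keyframe[] {
  // Group the records by date (name -> value per date), sorted chronologically.
  const byDate = Array.from(
    group(data, (d) => +d.date),
    ([ts, rows]) =>
      [new Date(ts), new Map(rows.map((r) => [r.name, r.value]))] as [Date, Map<string, number>]
  ).sort((a, b) => +a[0] - +b[0]);

  const keyframes: Keyframe[] = [];
  for (let i = 0; i < byDate.length - 1; i++) {
    const [dateA, valuesA] = byDate[i];
    const [dateB, valuesB] = byDate[i + 1];
    // Insert interpolated frames between each pair of real data points.
    for (let t = 0; t < framesPerTransition; t++) {
      const k = t / framesPerTransition;
      const date = new Date(+dateA * (1 - k) + +dateB * k);
      const names = new Set([...valuesA.keys(), ...valuesB.keys()]);
      const interpolated = new Map(
        Array.from(names, (name) => {
          const a = valuesA.get(name) ?? 0;
          const b = valuesB.get(name) ?? 0;
          return [name, a * (1 - k) + b * k] as [string, number];
        })
      );
      keyframes.push([date, rank(interpolated)]);
    }
  }
  const [lastDate, lastValues] = byDate[byDate.length - 1];
  keyframes.push([lastDate, rank(lastValues)]);
  return keyframes;
}
```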
In the onMount lifecycle method, once the keyframes have been created, set up a setInterval() function that advances the animation one keyframe at a time. Note: In a future tutorial, I will show you how to re-implement this with requestAnimationFrame.

Within the <g /> element in the markup section of the <BarChartRace /> component, use an each block to loop over keyframeItems and render a <rect /> for each visible bar. The items are keyed by the GitHub repositories' names so that Svelte knows not to recreate the bars anytime keyframeItems gets updated and to just continue modifying properties of the existing bars. The in and out directives allow us to control the enter and exit animations of the bars: in corresponds to an enter animation, and out corresponds to an exit animation. To keep things simple, we'll have each bar fade out when it exits and fade in when it enters. Finally, add an axis line along which the bars are left-aligned, as well as the date ticker, to the <svg /> element.

When you run the project in development via npm run dev, you should see a bar chart race that looks like the following:

Try customizing the bar chart race for your own historical count data. If you find yourself stuck at any point while working through this tutorial, then feel free to check out the project's GitHub repository or a live demo of this project in the following CodeSandbox: If you want to learn more about building visualizations with D3 and Svelte, then check out the Better Data Visualizations with Svelte course by Connor Rothschild, a partner and data visualization engineer at Moksha Data Studio.



Building a Word Cloud with D3 and Svelte

In this article, we will create a data visualization that displays the frequency of words in the lyrics of a song on the Billboard Hot 100 list, Vampire, by Olivia Rodrigo, using D3 and Svelte. Which words do you think catapult a song to the Billboard Hot 100 list?

When repeated enough times, words become memorable. Anytime you listen to a speech, notice how frequently certain words come up and how the repetition helps you recognize the importance of the speaker's message. If you happen to only have a transcript of the speech, then you would need to read or skim through paragraphs of text to grasp the essence of the speaker's words and gain a complete understanding of the message being conveyed. With word clouds (also known as tag clouds), you can visualize the frequency of words. Words are arranged in a cloud-shaped formation, and each word is sized and colored based on its frequency (or importance) in a given text. The more frequently a word appears, the larger (or more intense, color-wise) it appears in the word cloud. This makes it easier to visually identify critical keywords and themes in textual content. Simultaneously, word clouds capture and summarize the essence of textual content in a single glance. Whether you are interested in seeing what trending topics are being discussed in online communities or what words leaders use to inspire their nations, a word cloud offers a clear window into any textual content.

There's a D3 module available for generating word clouds: d3-cloud. This module takes a mapping of words and their frequencies and automatically determines how to properly size and position them in a word cloud with minimal collisions. However, the pure D3 implementation of a word cloud involves appending an SVG <text /> element, one by one, each time a word gets processed. What happens if we want to update the word cloud using another set of words? Rather than having to manually manage the DOM using D3's imperative API (i.e., manually removing all of the previous SVG <text /> elements, re-appending new SVG <text /> elements, etc.), we can let Svelte render elements to the DOM and keep the DOM in sync with our data via reactivity. This way, anytime our data changes, Svelte automatically updates the DOM accordingly.

In Svelte, all assignments are reactive. If we want to mark any number of top-level statements as reactive, all we have to do is wrap them in curly braces and prefix the block with the $: label syntax. This results in reactive statements. Any values within the reactive block become dependencies of the reactive statement. When any of these values change, the reactive statement gets re-run. This is perfect in case we want our word cloud to update anytime we provide a different set of words.

Below, I'm going to show you how to build a word cloud with D3 and Svelte. The word cloud will display the frequency of words in the lyrics of a song on the Billboard Hot 100 list, Vampire, by Olivia Rodrigo. The larger the word, and the less faded the word is, the greater the frequency of the word in the lyrics. By the end of this tutorial, you will have built the following word cloud:

To set up a new Svelte project with Vite and TypeScript, run the command npm init vite. Note: You may generate a new Svelte application with SvelteKit, but this tutorial is only focused on building out a single Svelte component for the word cloud.
Therefore, it's preferable to use a lighter template so that you don't have to deal with extra project files.

For the word cloud visualization, we will need to install two specific D3 modules. Run the following command to install these D3 modules and their type definitions in the Svelte project.

First, delete the src/lib directory and src/app.css file. Then, in src/main.ts, omit the import './app.css' statement at the top of the file. In the src/App.svelte file, clear out the contents of the script, style and markup sections. Within the script section, let's add the import statement for the <WordCloud /> component and a variable named lyrics that's set to the lyrics of the song Vampire, like so: ( src/App.svelte ) Within the style section, let's add some minor styles to horizontally center the <WordCloud /> component in the <main /> element. ( src/App.svelte ) Note: Styles defined in the <App /> component won't leak into other Svelte components. Within the <main /> element of the markup section, render the <WordCloud /> component. Also, pass lyrics to the text prop of the <WordCloud /> component, like so: ( src/App.svelte )

Within the src directory, create a new folder named components. This folder will contain any reusable components used in this Svelte application. In this case, there will only be one component in this directory: WordCloud.svelte. Create this file inside of the src/components directory. Within the src/components/WordCloud.svelte file, begin with an empty script section for the <WordCloud /> component: ( src/components/WordCloud.svelte ) At the top of the script section, import d3Cloud from the d3-cloud module. d3Cloud instantiates a new cloud layout instance, and it comes with chainable methods for configuring the layout. Additionally, import three methods from the d3-array module: ( src/components/WordCloud.svelte ) Then, declare the text prop that the <WordCloud /> component currently accepts. Set its default value to an empty string if no value is passed to the text prop. ( src/components/WordCloud.svelte )

d3Cloud comes with a chainable method called .words(). This method accepts the words and their frequencies as an array of objects with two properties. To turn the string of text into an array of objects with these properties, we'll need to split the text into individual words and count how often each word occurs (a hedged sketch of this step appears at the end of this section). Add these lines of code to the script section of the <WordCloud /> component, like so: ( src/components/WordCloud.svelte )

Like with any D3 data visualization, you need to define its dimensions. In the <WordCloud /> component's markup section, add an <svg /> element and set its width, height and viewBox using the values from dimensions. Since the words will be displayed using the Helvetica font family, let's set font-family to "Helvetica." Note: text-anchor="middle" aligns the middle of the text to the text's position. This is important since the layout algorithm determines positions using the middle of the text as the reference. By default, the start of the text gets aligned to the text's position.

Next, define a wordPadding variable that specifies the numerical padding to apply to each word in the word cloud. Since d3-cloud internally uses an HTML5 <canvas /> element to simulate the layout algorithm, this padding (in pixels) gets multiplied by 2, and this product gets set to the lineWidth property of the canvas's drawing context. For now, we'll set wordPadding to 2. Add these lines of code to the script section of the <WordCloud /> component, like so: ( src/components/WordCloud.svelte )
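A minimal sketch of the word-counting step and the wordPadding variable described above. The tokenization details (lowercasing, punctuation stripping, minimum word length) are assumptions; the article's exact processing may differ.

```ts
// src/components/WordCloud.svelte (script section) -- hedged sketch, not the article's exact code.
import { rollups, descending } from 'd3-array';

export let text = '';

// Padding (in pixels) applied to each word by the d3-cloud layout.
const wordPadding = 2;

let wordCounts: { text: string; size: number }[] = [];

// Reactive statement: recompute the word counts whenever the `text` prop changes.
$: {
  const tokens = text
    .toLowerCase()
    .replace(/[^a-z'\s]/g, ' ')
    .split(/\s+/)
    .filter((word) => word.length > 1);

  wordCounts = rollups(tokens, (occurrences) => occurrences.length, (word) => word)
    .sort((a, b) => descending(a[1], b[1]))
    .map(([word, count]) => ({ text: word, size: count }));
}
```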
With all of the necessary variables set, let's call d3Cloud() and configure it using its chainable methods. Anytime a word is successfully placed in the canvas that's used to simulate the layout algorithm, push an object with the calculated font size ( size ), coordinates ( x and y ), rotation ( rotate ) and the word itself to an array named cloudWords. Once everything is set up, call the .start() method on cloud to run the layout algorithm.

However, remember that Svelte's reactivity only gets triggered on assignments. Since the .push() method mutates the array, we cannot use cloudWords to render the list of words in the markup section of the <WordCloud /> component. Therefore, once the layout algorithm finishes running, assign cloudWords to words. Then, within the <svg /> element in the markup section of the <WordCloud /> component, use an each block to loop over the list of words and render a list of <text /> elements inside of a <g /> element (for grouping the <text /> elements), like so: Add these lines of code to the script section of the <WordCloud /> component, like so: ( src/components/WordCloud.svelte )

When you run the project in development via npm run dev, you should see a word cloud that looks like the following:

Currently, the size of a word communicates its frequency in a block of text: the larger the word, the more frequently it appears. However, what if we wanted to also communicate a word's frequency through the word's opacity? For example, the more faded a word is in the word cloud, the less frequently it appears in the block of text. To do this, we'll need to use the extent() method from the d3-array module to determine the maximum frequency. Then, by dividing a word's frequency by the maximum frequency, we get decimal values that can be set to the word's <text /> element's opacity attribute, like so:

Try customizing the word cloud for your own textual data. If you find yourself stuck at any point while working through this tutorial, then feel free to check out the live demo of this project in the following CodeSandbox: If you want to learn more about building visualizations with D3 and Svelte, then check out the Better Data Visualizations with Svelte course by Connor Rothschild, a partner and data visualization engineer at Moksha Data Studio.


How to Use useCallback Hook with TypeScript

The useCallback hook returns a memoized callback that only changes if one of the dependencies has changed. This helps us avoid unwanted and unnecessary component re-renders. The basic syntax for the useCallback hook is useCallback(callback, dependencies). You don't need any additional typings since TypeScript knows that useCallback accepts a function and an array of dependencies. It is preferable to use eslint-plugin-react-hooks, though, to ensure you don't miss any value in the dependencies array, but that's optional.

Let's assume we have a login form component and two input components, one for the email and one for the password. The problem is that when we change the email, React re-renders both inputs instead of only the email input. This happens because the handler functions are created anew on every re-render of the LoginForm component, so on each render the email and password inputs receive brand-new handler functions as props, which is why they re-render each time. Let's fix that (a hedged sketch of the fix follows below). Now, when we change the email or the password, only the corresponding input component re-renders.
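A hedged sketch of the pattern described above. The component and prop names are illustrative, not the article's exact code; the inputs are wrapped in React.memo so that the stable handler references from useCallback actually prevent re-renders.

```tsx
import { ChangeEvent, memo, useCallback, useState } from 'react';

type InputProps = {
  value: string;
  onChange: (event: ChangeEvent<HTMLInputElement>) => void;
};

// Memoized inputs only re-render when their props change.
const EmailInput = memo(({ value, onChange }: InputProps) => (
  <input type="email" value={value} onChange={onChange} />
));

const PasswordInput = memo(({ value, onChange }: InputProps) => (
  <input type="password" value={value} onChange={onChange} />
));

export const LoginForm = () => {
  const [email, setEmail] = useState('');
  const [password, setPassword] = useState('');

  // Without useCallback, these handlers would be new functions on every render,
  // so both memoized inputs would re-render whenever either field changes.
  const handleEmailChange = useCallback(
    (event: ChangeEvent<HTMLInputElement>) => setEmail(event.target.value),
    []
  );
  const handlePasswordChange = useCallback(
    (event: ChangeEvent<HTMLInputElement>) => setPassword(event.target.value),
    []
  );

  return (
    <form>
      <EmailInput value={email} onChange={handleEmailChange} />
      <PasswordInput value={password} onChange={handlePasswordChange} />
    </form>
  );
};
```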

How to Use fetch with TypeScript

The main problem with the fetch function is that it isn't generic. This makes it harder to type response data without writing additional wrappers. We can create a wrapper module to avoid this problem. Let's create a request function that will handle network requests. We can call this function as we would call fetch, or, since it is an async function, we can use await.

Right now the request function doesn't handle errors. To solve that, we have two options: handle errors inside the request function, or handle them outside of it. There are no obligatory rules for this, but the Single Responsibility Principle tells us not to mix those concerns in a single function. However, we will take a look at both ways.

The first option is to handle errors inside of the request function. We can use catch to handle rejected promises. There is a catch, though (no pun intended). The first issue can be solved by making the function async and using try-catch. However, this doesn't solve the second issue: not knowing how to handle the error. We can handle some basic network errors (Page Not Found, Bad Request), but we cannot handle any business-logic errors.

With the async/await approach, we can handle errors outside of the request function. We can also pre-define some basic request methods, like get and post, and use them like this (a hedged sketch follows below):
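A hedged sketch of the generic wrapper described above. The error-handling strategy and helper names follow the article's outline, but the exact implementation is an assumption.

```ts
// A generic wrapper around fetch so callers can type the response data.
async function request<TResponse>(url: string, config: RequestInit = {}): Promise<TResponse> {
  const response = await fetch(url, config);
  if (!response.ok) {
    // Basic network errors (404, 400, ...) surface here; business-logic errors are left to callers.
    throw new Error(`Request failed with status ${response.status}`);
  }
  return (await response.json()) as TResponse;
}

// Pre-defined request methods built on top of the wrapper.
const api = {
  get: <TResponse>(url: string) => request<TResponse>(url),
  post: <TBody, TResponse>(url: string, body: TBody) =>
    request<TResponse>(url, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify(body),
    }),
};

// Usage: the expected response shape is supplied as a type parameter.
type Todo = { id: number; title: string; completed: boolean };

async function loadTodo() {
  const todo = await api.get<Todo>('https://jsonplaceholder.typicode.com/todos/1');
  console.log(todo.title);
}
```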

How to Consume a GraphQL API in Next.js with urql and next-urql

In this article, we will learn how to use urql to consume a GraphQL API in a Next.js SSR app. urql is a lightweight, versatile and extensible GraphQL client for modern frontend apps, with support for React, Svelte, Vue and plain JavaScript. It was introduced as an alternative to existing GraphQL clients like Relay and Apollo. These GraphQL clients are largely similar when it comes to setup and use. In the next section, we will see how they compare with each other on various parameters.

All three GraphQL clients provide standard features such as queries, mutations, caching and subscriptions, while only differing slightly in implementation. urql's configuration defaults are slightly better than Apollo's, and it has a lower entry barrier thanks to its thorough documentation and native support for features otherwise only available via third-party plugins. Additionally, urql has a powerful caching mechanism that offers both normalized and document caching via the @urql/exchange-graphcache package. urql is built on the principle of having a lightweight core, extensible via middleware called exchanges. This makes it the smallest in bundle size compared to the other options. A full, detailed comparison of the clients by different categories of features can be found on the urql website.

Next.js is one of the most popular React-based frameworks, and urql has first-class native support for it via the next-urql package. Apollo and Relay do not have official plugins with support for Next.js, which means the implementation might change between releases of the framework, and any app that uses it will have to be constantly maintained to keep up. With next-urql, most of the boilerplate involved in setting up urql for Server-Side Rendering (SSR) with Next.js is already done for you. It provides convenience functions such as the withUrqlClient HOC, which enables your SSR pages to pre-fetch data via GraphQL queries.

Next.js requires Node to be pre-installed on your system. You can then scaffold a Next.js TypeScript app using the following command in your terminal/command prompt. Once you have a skeleton app set up, you can install the dependencies required for urql. graphql is a peer dependency of urql and provides the underlying GraphQL implementation. No additional type definitions are required since urql is written in TypeScript. next-urql provides the Next.js bindings for urql. react-is is a peer dependency of next-urql, required for react-ssr-prepass to walk the component tree and pre-fetch any data required for rendering.

We can use the withUrqlClient HOC to wrap our entire app in the urql context (a hedged sketch follows at the end of this section). This makes the urql client and hooks usable in the rest of our app. The first parameter to withUrqlClient is a function that returns a ClientOptions object. This can be used to pass configuration into the urql client instance, such as the API URL, custom fetch function, request policy and any additional middleware in the exchanges property. For this tutorial, we will use the GitHub GraphQL API. This API requires you to authenticate using a personal access token. You can follow the steps described here to create one after logging in with your GitHub account. We can then configure our urql client to pass the token as part of the authorization header on each request to the API. Now that we have our urql client set up, let us look at how we can use it to connect to the GitHub API and fetch some data.
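A hedged sketch of wrapping the app with withUrqlClient as described above. The environment variable name is an assumption, and the exact ClientOptions shape may differ across urql/next-urql versions.

```tsx
// pages/_app.tsx -- hedged sketch, not the article's exact code.
import type { AppProps } from 'next/app';
import { withUrqlClient } from 'next-urql';

const App = ({ Component, pageProps }: AppProps) => <Component {...pageProps} />;

export default withUrqlClient((_ssrExchange) => ({
  url: 'https://api.github.com/graphql',
  fetchOptions: {
    headers: {
      // Personal access token for the GitHub GraphQL API (assumed env var name).
      authorization: `Bearer ${process.env.NEXT_PUBLIC_GITHUB_TOKEN}`,
    },
  },
}))(App);
```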
We will build a simple component that displays a list of repositories, with each item showing a link to the repo, its name, star count and commit count. The component will look somewhat like this.

To start, let us look at the GraphQL query that should be used. This query fetches the first 10 repositories for the current user (determined from the personal access token). For each repository, it includes the name, ID, URL, stargazer count and the number of commits to the main branch. There are various other fields that can be added to the query, as documented in the API reference. We can execute this query using the useQuery hook from urql. Since we're using TypeScript, let us model the API response with the correct expected types and use them as type parameters to useQuery (a hedged sketch of a typed query appears at the end of this section). The result object returned by useQuery exposes a number of useful items, of which we will currently use the fetching flag, which tells us whether or not the operation is still in progress, and the data property, which contains the fetched data when available. Let us now add some simple UI to render the returned data. This Repositories component now fetches and renders a list of repositories with star and commit counts.

So far, we've seen how to set up urql in a Next.js app and use it to query the GitHub GraphQL API. Let's now take it a step further and learn how to create mutations - these are API operations that can cause the data to change in the backend. For the purposes of this tutorial, we will implement the creation of an issue within a given GitHub repository. The GraphQL mutation to create an issue looks like this: This mutation takes three variables - the repository ID to create the issue in, the title and the body of the issue. On success, it returns an Issue object that can contain the number, title and body. So let us model the request variables and response, and create the mutation. The useMutation hook returns a tuple with two items - an object that exposes the current state of the mutation request, and a function that can be invoked with input variables to execute the actual mutation.

Let us adapt our Repositories component to be able to call this mutation. We'll refactor and extract some of the code into an individual Repository component along the way. This is what the refactored Repositories component will look like. All the GraphQL types have been moved to a separate types module. And the individual Repository component now renders the list item, along with a button that invokes the createIssue mutation when clicked. Clicking the button creates an issue with a sample fixed title and body in the corresponding repo.

Every query or mutation in urql is modeled as an 'operation', and the system at any moment has a stream of operations issued by various parts of the app. Exchanges are pieces of middleware that transform the stream of operations into a stream of results. This is explained in more detail in the architecture documentation. Some of urql's core features such as fetching data and caching are also handled via exchanges implemented by the urql team and provided by default. You can also create your own exchanges by implementing functions that conform to the rules defined here.
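A hedged sketch of a typed query with useQuery, consistent with the query described above. The field selection and type names are illustrative and omit the commit count for brevity.

```tsx
import { useQuery } from 'urql';

const RepositoriesQuery = `
  query {
    viewer {
      repositories(first: 10) {
        nodes { id name url stargazerCount }
      }
    }
  }
`;

interface Repository {
  id: string;
  name: string;
  url: string;
  stargazerCount: number;
}

interface RepositoriesQueryData {
  viewer: { repositories: { nodes: Repository[] } };
}

export const Repositories = () => {
  // The data type parameter makes `data` strongly typed below.
  const [{ data, fetching }] = useQuery<RepositoriesQueryData>({ query: RepositoriesQuery });

  if (fetching) return <p>Loading...</p>;

  return (
    <ul>
      {data?.viewer.repositories.nodes.map((repo) => (
        <li key={repo.id}>
          <a href={repo.url}>{repo.name}</a> ({repo.stargazerCount} stars)
        </li>
      ))}
    </ul>
  );
};
```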
Server-Side Rendered apps need to be set up to fetch data on the server side and send it down to the client for hydration. urql supports this via the ssrExchange. The SSR exchange has two functions: it gathers all the data as it is being fetched on the server, and it uses the serialized data on the client side to rehydrate the app without a refetch. When using next-urql, most of the boilerplate involved in instantiating the ssrExchange is already done for you. So if your client does not use any other exchanges, you do not explicitly need to instantiate the ssrExchange when creating the client. To enable SSR, you simply need to set the ssr flag in the second argument to the client configuration function. If you do want to add other exchanges to your client, they can be specified in an exchanges property returned by the configuration function. This function also gets the instance of the ssrExchange passed into it when called.

Enabling SSR when wrapping the top-level App component in withUrqlClient disables Next's 'Automatic Static Optimization', which allows for hybrid apps with both server-side rendered and statically generated pages. If this is required in your app, you can wrap individual page components with withUrqlClient as required. When applying withUrqlClient to specific pages, we can also use getStaticProps or getServerSideProps to pre-fetch the data and populate the urql cache. This will render the page as a static page, further optimizing performance and allowing us to perform other operations in these functions.

Let us adapt our app to use server-side rendering with getServerSideProps for our Repositories component. We will add getServerSideProps to our Home page component, as this function can only be exported from page components. The getServerSideProps function gets called when the page is being rendered server-side. It will populate the cache so that the subsequent render of the Repositories component will hydrate it from the cache when useQuery is called.

In this article, we have learnt how to set up urql with a Next.js app and perform queries and mutations against the GitHub GraphQL API. Further, we have also learnt about the architecture of urql and how exchanges work. We have used next-urql to implement server-side rendering with pre-fetching of queries using the urql client and cache. In a subsequent tutorial, we will learn how to use urql exchanges for authentication and caching. All the code used in this article is available on my GitHub.


Custom Annotations on XY Line Charts with visx and React

In this article, we will learn how to build an XY line chart with custom annotations using visx and React. visx is a library built and maintained by the folks at Airbnb that provides low-level visualization primitives for React. It is a thin wrapper around D3.js and is infinitely customizable for any of your data visualization needs. visx provides React components encapsulating D3 constructs, taking away some of the complexity and learning curve involved in working with D3.

In this tutorial, we will learn how to use custom annotations to enrich and add context to your line charts using visx and React. We will be charting Apple Inc.'s (AAPL) stock price over the last ten years and overlaying it with annotations for different product launch dates. This will help us understand how the stock price was affected by various important launches in the company's history.

Let us start by creating a stock-standard React TypeScript app using create-react-app. We can then install the @visx/xychart library, which we need for this tutorial, along with date-fns, which we will use for date manipulation.

In this tutorial, we will use historical stock price data for Apple (AAPL) from Kaggle. I've transformed the raw CSV data into JSON and simplified it to have just two main properties per data point: the x property representing the date and the y property representing the closing stock price at that date. I have also curated an additional dataset containing dates for important Apple product launches and company events in the last ten years. This has been combined with the stock price data - some of the data points have an additional events property which describes the events that occurred around that time as an array of strings. The data can be found in the GitHub repo for this tutorial.

Let us use the components from the @visx/xychart library that we installed earlier to create a simple plot using the first dataset from step 2. Let us take a closer look at the different components used in the chart: When the Chart component is instantiated in App.tsx, your app should look somewhat like this:

Now that we have a basic chart up and running, we can use the additional data in the events properties to add custom annotations to the chart. This can be done using the Annotation component from @visx/xychart. labelXOffset and labelYOffset are pixel values that indicate how far away the annotation needs to be from the data point it is associated with - this prevents the annotation from completely overlapping and obscuring the point in question. We've filtered the data points from stockPrices down to those that have the events property and added an annotation for each one. Each annotation has a label that displays the date and all the events for that date. The label is attached to the data point using an AnnotationConnector. With the annotations added, your chart should now look like this:

The annotations help provide a better picture of the company over the years, and can offer possible explanations for the variations in share price (do note, however, that correlation does not necessarily imply causation 😉). In this tutorial, we have used the example of Apple's share price variations to understand how to plot an XY chart with custom annotations with visx and React. There are a number of improvements that can be made to the chart. You can read more about the XY Chart in the official docs. As always, all the code used in this tutorial is available on GitHub.


Static Site Generation with Next.js and TypeScript (Part V) - Build Time Access Tokens and Exporting Static HTML

Disclaimer - Please read the fourth part of this blog post here before proceeding. It demonstrates how to statically generate pages with dynamic routes using the getStaticPaths() function. If you just want to jump straight into this tutorial, then clone the project repository and install the dependencies.

In the previous part of this tutorial series, we encountered a big problem: each getStaticProps() and getStaticPaths() function required us to obtain an access token before being able to request any data from the Petfinder API. This meant that anytime we built the Next.js application for production, we had to obtain several access tokens for the Petfinder API, one per data-fetching function. If we were to add more statically generated pages to the Next.js application that depend on data from the Petfinder API, then we would continue to accumulate more access tokens that are scattered throughout the Next.js application. Unfortunately, Next.js's custom <App /> component does not support data-fetching functions like getStaticProps() and getStaticPaths(). This means we don't have the option of obtaining a single access token, fetching all of the necessary data (e.g., a list of pet animal types and lists of recently adopted pets) in the getStaticProps() function of the custom <App /> component and passing the data as props to every page component at build time. One way to make the access token globally available to all page components at build time is to inject it as an environment variable.

Below, I'm going to show you how to build a Next.js application with a single access token. We will obtain an access token from the Petfinder API via the cURL CLI tool, set it to an environment variable named PETFINDER_ACCESS_TOKEN and execute the npm run build command with this environment variable. Then, I'm going to show you how to export the Next.js application to static HTML. This allows us to deploy and serve the Next.js application on fast, static hosting solutions like Cloudflare Pages and GitHub Pages, all without ever having to spin up a Node.js server. To get started, clone the project repository and install the dependencies. If you're coming from the fourth part of this tutorial series, then you can continue from where the fourth part left off.

Within the project directory, create a Makefile: With a Makefile, we can define rules that each run a set of commands. Rules are similar, purpose-wise, to npm scripts in package.json files. Each rule consists of, at a minimum, a target and a command. The target is the name of the rule, and the command is the actual command to execute. Inside of the Makefile, add two rules: dev and build. Note: Each command must be indented with a real tab character (rendered four spaces wide); otherwise, you may encounter the error *** missing separator. Stop. . Here, invoking the make command with the dev rule as the target ( make dev ) runs npm run dev, and invoking the make command with the build rule as the target ( make build ) runs npm run build.

The Makefile allows us to store the result of shell commands in variables. For example, suppose we add the following line to the top of the Makefile. In the above example, we set the variable PETFINDER_ACCESS_TOKEN to the output of the echo command, which is the string "abcdef." The shell function performs command expansion, which means taking a command as an argument, running the command and returning the command's output. Once the shell function returns the command's output, we assign this output to the simply expanded variable PETFINDER_ACCESS_TOKEN.
Anytime we reference a simply expanded variable, whose value is assigned with :=, the variable gets evaluated once (at the time of assignment) and procedurally, much like what you would expect in a typical, imperative programming language like JavaScript. So if we were to reference the variable's value with $(), then the value will just be the string "abcdef." ( Makefile ) GNU make comes with another "flavor" of variable, the recursively expanded variable, which evaluates a variable's value completely differently from what most developers are used to. It's out of the scope of this tutorial, but you can read more about it here.

If you print the value of the PETFINDER_ACCESS_TOKEN environment variable in the <HomePage /> page component's getStaticProps() function, then you will see the value "abcdef" logged in the terminal when you run the make dev command. ( pages/index.tsx ) Note: The PETFINDER_ACCESS_TOKEN environment variable's name will not be prefixed with NEXT_PUBLIC_. Notice that the command (along with the value of the environment variables passed to it), PETFINDER_ACCESS_TOKEN=abcdef npm run dev, gets logged to the terminal. To tell make to suppress this echoing, you can prepend @ to lines that you want suppressed. For simple commands like echo, you can suppress the echoing by prepending @ to the command itself, like so: However, because the command we want suppressed begins with an environment variable, we wrap the entire command in @(), like so: ( Makefile ) When you re-run the make dev command, PETFINDER_ACCESS_TOKEN=abcdef npm run dev no longer gets logged to the terminal.

To obtain an access token from the Petfinder API via cURL, you must send a request to the POST https://api.petfinder.com/v2/oauth2/token endpoint with the grant type ( grant_type ), client ID ( client_id ) and client secret ( client_secret ). This data can be passed by specifying a single -d option (short for --data ) as a concatenated string of key=value pairs (delimited with an ampersand) or multiple -d options, providing a key=value pair for each one. Here's what using the single -d option looks like: And here's what using multiple -d options looks like: Here, we will use the single -d option. When you run the cURL command, you will see that the access token is returned in stringified JSON. We can pluck the access token from this stringified JSON by piping the output of the cURL command (the stringified JSON) to a sed command. On Unix-based machines, the sed command performs many types of text processing tasks, from search to substitution. The -E option (short for the --regexp-extended option) tells the sed command to find a substring based on an extended regular expression, which requires special characters to be escaped if you want to match them as literal characters. When you run the cURL command with the piped sed command, you will see that only the access token is returned.

Within the Makefile, let's set the PETFINDER_ACCESS_TOKEN variable to the cURL command with the piped sed command, like so: To pull the NEXT_PUBLIC_PETFINDER_CLIENT_ID, NEXT_PUBLIC_PETFINDER_CLIENT_SECRET and NEXT_PUBLIC_PETFINDER_API_URL environment variables from the .env file, we can use the include directive to pause reading from the current Makefile and read from the .env file before resuming. Then, with the export directive, we can export the environment variables that were read from the .env file. ( Makefile )
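Once the token is exported this way, a page's getStaticProps() function can read it from process.env at build time. A hedged sketch, assuming the Petfinder /types endpoint and response shape; the article's actual request code differs.

```ts
// pages/index.tsx -- hedged sketch of consuming the build-time token, not the article's exact code.
import type { GetStaticProps } from 'next';

export const getStaticProps: GetStaticProps = async () => {
  const response = await fetch(`${process.env.NEXT_PUBLIC_PETFINDER_API_URL}/types`, {
    headers: {
      // Token injected by the Makefile at build time.
      Authorization: `Bearer ${process.env.PETFINDER_ACCESS_TOKEN}`,
    },
  });
  const { types } = await response.json();

  return { props: { types } };
};
```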
( Makefile ) When you re-run the make dev command and visit http://localhost:3000/ in a browser, you will find that the access token is immediately available to the <HomePage /> page component's getStaticProps() function. Now we can remove all instances of fetching an access token within getStaticProps() and getStaticPath() functions from the Next.js application, and pass the PETFINDER_ACCESS_TOKEN environment variable to Authorization header of any request that's sent to the Petfinder API. Also, you can now remove the console.log({ PETFINDER_ACCESS_TOKEN }) line from the <HomePage /> page component's getStaticProps() function. ( pages/index.tsx ) ( pages/types/[type].tsx ) By passing the access token as an environment variable to the Next.js application, the Next.js application now makes ten fewer requests. Like with any solution, this approach does come with a caveat. Since an access token from the Petfinder API expires one hour from the time it was issued, a caveat of this approach is that you will have to reset the development server every hour to refresh the access token. To export the Next.js application to static HTML , we must add an export npm script to the package.json file that: ( package.json ) Add a new rule named export_html to the Makefile that runs the export npm script with the PETFINDER_ACCESS_TOKEN environment variable: ( Makefile ) Note : Remember, export is already a GNU make Β directive. Therefore, you cannot name the rule export . When you run the make export_html command, you will find that the Next.js application could not be exported as static HTML because it makes use of the image optimization feature, which requires a Node.js server. To resolve this problem, we need to set experimental.images.unoptimized to true in the Next.js configuration to disable the image optimization feature. Specifically, we only want to disable this feature when the NEXT_EXPORT environment variable is present. The NEXT_EXPORT environment variable will only be set when exporting the Next.js application to static HTML. ( Makefile ) ( next.config.js ) When you re-run the make export_html command, the Next.js application will be exported to static HTML. Inside the project directory, you will find the exported HTML in an out directory: To test the static pages, you can spin up a standalone HTTP server that serves the contents of the out directory: If you visit http://localhost:8080/types/horse in a browser and disable JavaScript, then you will see that this page has already been pre-rendered at build time. If you find yourself stuck at any point during this tutorial, then feel free to check out the project's repository for this part of the tutorial here . Proceed to the next part of this tutorial series to dive into building interactive, client-side features for the Next.js application. If you want to learn more advanced techniques with TypeScript, React and Next.js, then check out our Fullstack React with TypeScript Masterclass :


Static Site Generation with Next.js and TypeScript (Part IV) - Dynamic Routes with getStaticPaths

Disclaimer - Please read the third part of this blog post here before proceeding. It walks through image optimization in a Next.js application with the <Image /> component of next/image and Blurhash, an algorithm that generates beautiful, lightweight, canvas-based placeholder images. If you just want to jump straight into this tutorial, then clone the project repository and install the dependencies. When you look at client-side routing solutions for single-page applications, such as React Router for React-based applications, you will find that they require you to define each route and manually map each one to a specific page component. With Next.js, you have a file-system based router that automatically maps each page component within the pages directory to a unique route. No extra code or routing library is needed. The route of a page is determined by the location of its page component within the pages directory. For example, if the page component <Blog /> is defined within the file pages/blog/index.tsx (or pages/blog.tsx ), then you can access it at the route /blog . The Next.js router can handle several different types of routes: If your Next.js application depends on external data to statically generate pages for dynamic routes, such as from a content management system (CMS) for blog articles, then you can export a getStaticPaths() function that specifies the paths to generate at build time, like so: Therefore, if this was placed in a pages/posts/[id].tsx file, then for each post from the JSONPlaceholder API, Next.js will pre-render a static page that can be accessed at the route /posts/:id (e.g., /posts/1 ). In the route, the named parameter id gets replaced with the value of a param with a matching name ( id: post.id ). Since the JSONPlaceholder API returns 100 posts, Next.js will pre-render 100 static pages and routes. With fallback set to false , any requests to /posts/:id outside of the range of pre-generated routes of /posts/1 to /posts/100 will return a 404 page. As for the content of the page, you will still need to export a getStaticProps() function to fetch any data that's needed to pre-render the page's content. Note : The getStaticPaths() function can be used only when the page component uses the getStaticProps() function. As for client-side navigation from one route to another, Next.js, like React Router, comes with a <Link /> component that lets users navigate to other application routes without the browser triggering a full-page reload. Below, I'm going to show you how to pre-generate paths for dynamic routes-based pages with the getStaticPaths() function. For each pet animal type, we will be adding a static page ( /types/:type ) that lists the most recently adopted pets. By the end of this tutorial, the application will gain eight more static pages, all accessible from the home page: To get started, clone the project repository and install the dependencies. If you're coming from the third part of this tutorial series, then you can continue on from where the third part left off. Within the project directory, let's install several dependencies: Since the Petfinder API returns a pet's description as stringified HTML markup, these two dependencies will help us clean up the HTML markup and prepare it so that it is safe to display to users. Let's install several dev. 
dependencies: According to the Petfinder API documentation , each pet returned by the API comes with photos of various sizes ( small , medium , large and full ) that are hosted on the photos.petfinder.com and dl5zpyw5k3jeb.cloudfront.net (likely subject to change in the future) domains. Therefore, we need to add the photos.petfinder.com and dl5zpyw5k3jeb.cloudfront.net domains to the list of whitelisted image domains in the Next.js configuration. In the case that the Petfinder API does not return an image for a pet, we will display a generic placeholder image, which will come from the via.placeholder.com domain. And so, we will also need to add the via.placeholder.com domain to the list of whitelisted image domains in the Next.js configuration: ( next.config.js ) First, let's create a types directory under the pages directory. Within this newly created directory, create a [type].tsx file. This file will contain a page component that will be served for the route /types/:type . The named parameter type will be replaced with slugified pet animal types, such as dog and small-furry . Within this file, let's define a page component named <TypePage /> , and export two functions, getStaticProps() and getStaticPaths() . ( pages/types/[type].tsx ) The getStaticPath() function will need to send a request to the GET /types endpoint of the Petfinder API to fetch the available pet animal types and generate a path for each one. The getStaticProps() function will need to send a request to three endpoints of the Petfinder API: And pass all of the returned data as props to the <TypePage /> component so that the component can render a list of the most recently adopted pets and a list of available breeds for the specific pet animal type. However, remember that for us to interact with the Petfinder API, we must first obtain an access token. This access token must be attached to the Authorization header of any subsequent request to the Petfinder API so that we have the necessary permissions for receiving data from the Petfinder API. The getStaticProps() function's context argument provides the values of the named parameters in the route via a params object. ( pages/types/[type].tsx ) Note : Unlike the getStaticProps() function that receives a context object as an argument, the getStaticPaths() function does not receive any argument. One thing you will notice is that if we decide to add more pages to this application, then anytime those pages require us to fetch data from the Petfinder API, we would have to obtain a new access token for each getStaticProps() / getStaticPaths() function. This is an incredibly wasteful, especially if we only need one access token to statically generate the pages at build time. Is there a way that we can obtain one access token, and use this one access token for fetching data from the Petfinder API across every getStaticProps() / getStaticPaths() function in a Next.js application? Unfortunately, Next.js's <App /> component does not support data-fetching functions like getStaticProps() (and by extension, getStaticPath() ) and getServerSideProps() , so we can't even consider obtaining an access token and passing data, such as a list of pet animal types, as props to every page component at build time. A possible solution is to leverage a Makefile that obtains an access token by using cURL , and set the access token as an environment variable so that the Next.js application can directly access the access token anywhere. We will explore this in the next part of this tutorial series. 
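Before moving on to the components, here is a condensed sketch of what the data-fetching side of pages/types/[type].tsx might look like at this stage. The slugify helper, response field names and query parameters are assumptions for illustration, and the sketch assumes NEXT_PUBLIC_PETFINDER_API_URL points at the Petfinder v2 base URL.

```ts
// pages/types/[type].tsx (data-fetching sketch)
import type { GetStaticPaths, GetStaticProps } from 'next';

const API_URL = process.env.NEXT_PUBLIC_PETFINDER_API_URL;

// At this stage of the series, each data-fetching function obtains its own
// access token. (Part V replaces this with a single build-time token.)
const getAccessToken = async (): Promise<string> => {
  const res = await fetch(`${API_URL}/oauth2/token`, {
    method: 'POST',
    body: new URLSearchParams({
      grant_type: 'client_credentials',
      client_id: process.env.NEXT_PUBLIC_PETFINDER_CLIENT_ID ?? '',
      client_secret: process.env.NEXT_PUBLIC_PETFINDER_CLIENT_SECRET ?? '',
    }),
  });
  return (await res.json()).access_token;
};

// Turns "Small & Furry" into "small-furry".
const slugify = (name: string) => name.toLowerCase().replace(/[^a-z0-9]+/g, '-');

export const getStaticPaths: GetStaticPaths = async () => {
  const accessToken = await getAccessToken();
  const res = await fetch(`${API_URL}/types`, {
    headers: { Authorization: `Bearer ${accessToken}` },
  });
  const { types } = await res.json();

  return {
    paths: types.map((type: { name: string }) => ({
      params: { type: slugify(type.name) },
    })),
    fallback: false,
  };
};

export const getStaticProps: GetStaticProps = async ({ params }) => {
  const accessToken = await getAccessToken();
  // The named route parameter is available on context.params.
  const type = params?.type as string;
  const res = await fetch(`${API_URL}/animals?type=${type}&status=adopted`, {
    headers: { Authorization: `Bearer ${accessToken}` },
  });
  const { animals } = await res.json();

  return { props: { type, adoptedAnimals: animals } };
};
```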
With all of the necessary data fetched from the Petfinder API, let's create two components: <AnimalCard /> and <AnimalCardsList /> . The <AnimalCard /> component will render a card that displays information about a pet, such as its name, breed, adoption status, contact information of the organization that's currently sheltering and caring for the pet, etc. The <AnimalCardsList /> component will render a list of <AnimalCard /> components in a single-column layout. Within the <TypePage /> page component, let's update the main heading with the pet type and render the <AnimalCardsList /> component under the second <section /> element. We set the animals prop of the <AnimalCardsList /> component to the list of recently adopted pets fetched from the Petfinder API ( adoptedAnimals ). ( pages/types/[type].tsx ) Now, inside of the components/AnimalCardsList.tsx file, let's define the <AnimalCardsList /> component: ( components/AnimalCardsList.tsx ) Finally, inside of the components/AnimalCard.tsx file, let's define the <AnimalCard /> component. Most of the icons used in the card come from heroicons . ( components/AnimalCard.tsx ) Let's test that everything works by spinning up the Next.js application in development mode. Inside of a browser, visit http://localhost:3000/types/horse . You will find that Next.js has successfully generated a page that belongs to the dynamic routes for /types/:type . When you generate the static pages at build time ( npm run build ), you will notice a static page for each pet animal type under the .next/server/pages/types directory. When you spin up the statically generated site in production mode ( npm run start ) and visit any of the static pages (e.g., http://localhost:3000/types/horse ), you will notice that no network requests are made to the Petfinder API and that the page has already been pre-rendered. To help with page navigation, let's add breadcrumbs to the top of the <TypePage /> component. Create a <Breadcrumbs /> component that takes a list of pages as props and renders a breadcrumb for each page. ( components/Breadcrumbs.tsx ) Then, add the <Breadcrumbs /> component to the <TypePage /> component, like so: ( pages/types/[type].tsx ) When you re-run the Next.js application, you will find the breadcrumbs at the top of the page. If you find yourself stuck at any point during this tutorial, then feel free to check out the project's repository for this part of the tutorial here . Proceed to the next part of this tutorial series to learn how to obtain a single access token at build time that can be used across every getStaticProps() / getStaticPath() function in a Next.js application. Plus, we will further flesh out the <TypePage /> component with client-side rendered content. If you want to learn more advanced techniques with TypeScript, React and Next.js, then check out our Fullstack React with TypeScript Masterclass .
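To recap the list component described in this part, a minimal sketch of components/AnimalCardsList.tsx might look like the following; the Animal interface name and the Tailwind classes are assumptions rather than the project's exact code.

```tsx
// components/AnimalCardsList.tsx (sketch)
import AnimalCard from './AnimalCard';
import { Animal } from '../shared/interfaces/petfinder.interface';

interface AnimalCardsListProps {
  animals: Animal[];
}

// Renders the cards in a single-column layout.
const AnimalCardsList = ({ animals }: AnimalCardsListProps) => (
  <ul className="flex flex-col gap-4">
    {animals.map((animal) => (
      <li key={animal.id}>
        <AnimalCard animal={animal} />
      </li>
    ))}
  </ul>
);

export default AnimalCardsList;
```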


Static Site Generation with Next.js and TypeScript (Part VI) - Client-Side Rendering

Disclaimer - Please read the fifth part of this blog post here before proceeding. It covers how to efficiently build a Next.js application with a single access token that can be used across all getStaticProps() and getStaticPath() functions. It also covers how to export a Next.js application to static HTML. If you just want to jump straight into this tutorial, then clone the project repository and install the dependencies. If all rendering happened on the client-side, then you end up with several problems. For example, suppose you build an application with Create React App . If you disable JavaScript in the browser and reload the application, then you will find the page void of content since React cannot run without JavaScript. Therefore, checking the DOM in the developer console, the <div id="root" /> element, where React renders all of the dynamic content, will be shown to be empty. There's also the possibility of the client running on an underpowered device, so rendering might take longer than expected. Or worse, the client has a poor network connection, so the application might have to wait longer for JavaScript bundles and other assets to be fully fetched before being able to render anything to the page. This is why it's important to not blindly render all content with only one rendering strategy. Rather, you should consider taking a hybrid approach when building an application. By having some content pre-rendered in advance via static-site generation (or server-side rendering) and having the remaining content rendered via client-side rendering, you can simultaneously deliver both a highly performant page and an enriching user experience. Thus far, every page of our Next.js application has been pre-rendered: the / and /types/:type pages. The initial HTML of the home page ( / ) contains the markup of the eight pet animal type cards that each allows users to navigate to a list of recently adopted pets. The initial HTML of each type page ( /types/:type ) contains the markup of the list of recently adopted pets. Once the page's initial HTML gets loaded and rendered, the browser hydrates the HTML. This breathes life into the page, giving it the ability to respond to user events and enabling features like client-side navigation and lazy-loading. Below, I'm going to show you how to render dynamic content on the client-side of a Next.js application with the useEffect Hook. When the user visits the /types/:type page, the list of pet animals available for adoption will be fetched and rendered on the client-side. By delegating the rendering of this list to the client-side, we reduce the likelihood of the list being outdated and including a pet that might have been adopted very recently. To get started, clone the project repository and install the dependencies. If you're coming from the fifth part of this tutorial series, then you can continue on from where the fifth part left off. Within the <TypePage /> component, let's define two state variables and their corresponding setter methods: ( pages/types/[type].tsx ) To fetch a list of adoptable pet animals, let's call a useEffect Hook that fetches this list upon the <TypePage /> component mounting to the DOM. The type prop is passed to the dependency array of this Hook because its id property is accessed within the Hook. 
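A sketch of that state and effect follows. In the article's code this logic lives directly inside the <TypePage /> component; it is shown here as a small standalone component, and the prop names, endpoint query parameters and Animal shape are assumptions. Note that the Authorization header below intentionally reproduces the problem discussed next.

```tsx
import { useEffect, useState } from 'react';

interface Animal {
  id: number;
  name: string;
}

interface AnimalType {
  id: string;
  name: string;
}

const AdoptableListings = ({ type }: { type: AnimalType }) => {
  const [adoptableAnimals, setAdoptableAnimals] = useState<Animal[]>([]);
  const [isUpdatingAdoptableListings, setIsUpdatingAdoptableListings] = useState(false);

  useEffect(() => {
    const fetchAdoptableAnimals = async () => {
      setIsUpdatingAdoptableListings(true);
      const res = await fetch(
        `${process.env.NEXT_PUBLIC_PETFINDER_API_URL}/animals?type=${type.id}&status=adoptable`,
        // PETFINDER_ACCESS_TOKEN is not exposed to the browser, which leads
        // to the error message discussed just below.
        { headers: { Authorization: `Bearer ${process.env.PETFINDER_ACCESS_TOKEN}` } },
      );
      const { animals } = await res.json();
      setAdoptableAnimals(animals);
      setIsUpdatingAdoptableListings(false);
    };

    fetchAdoptableAnimals();
    // type.id is read inside the effect, so type belongs in the dependency array.
  }, [type]);

  if (isUpdatingAdoptableListings) {
    return <p>Loading...</p>;
  }

  return (
    <ul>
      {adoptableAnimals.map((animal) => (
        <li key={animal.id}>{animal.name}</li>
      ))}
    </ul>
  );
};

export default AdoptableListings;
```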
( pages/types/[type].tsx ) When you run the Next.js application in development mode ( make dev ) and visit the page http://localhost:3000/types/horse , you will encounter the following error message: Since the PETFINDER_ACCESS_TOKEN environment variable is not prefixed with NEXT_PUBLIC , the PETFINDER_ACCESS_TOKEN environment variable is not exposed to the browser. Therefore, we will need to fetch a new access token for sending requests to the Petfinder API on the client-side. ( pages/types/[type].tsx ) To communicate to users that the list of adoptable pet animals is being fetched from the Petfinder API, let's add a <Loader /> component and show it to the user anytime the page is updating this list. To the right of the animated spinner icon, the <Loader /> component shows a default "Loading ..." message. With the className and children props, you can override the <Loader /> component's styles and the default "Loading..." message respectively. ( components/LoadingSpinner.tsx ) Now let's add a new section to the <TypePage /> component that displays the list of adoptable pet animals. Additionally, let's update the <TypePage /> component accordingly so that the <Loader /> component is shown when the isUpdatingAdoptableListings flag is set to true . ( pages/types/[type].tsx ) When the Next.js application refreshes with these changes, fully reload the page. The list of recently adopted pet animals (above-the-fold content) instantly gets rendered since it was pre-rendered at build time. If you quickly scroll down the page, you can momentarily see the loader just before it disappears and watch as the client renders the fetched list of adoptable pet animals (below-the-fold content). Suppose the page is kept open for some time. The initial list of adoptable pet animals will become outdated. Let's give the user the ability to manually update this list by clicking an "Update Listings" button. ( components/UpdateButton.tsx ) When the user clicks on the button, and the handleOnClick method sets the isUpdating flag to true ... Within the <TypePage /> component, let's move the fetchAdoptableAnimals function out of the useEffect Hook so that it can also be called within the updateAdoptableListing() function, which gets passed to the handleOnClick prop of the <UpdateButton /> component. By wrapping the updateAdoptableListing() function in a useCallback Hook, the function is memoized and only gets recreated when the prop type changes, which should never change at any point during the lifetime of the <TypePage /> component. Therefore, anytime a state variable like adoptableAnimals or isUpdatingAdoptableListings gets updated and causes a re-render of the component, the fetchAdoptableAnimals() function will not be recreated. ( pages/types/[type].tsx ) Upon the initial page load, a simple animated spinner icon with the text "Loading..." is shown to the user as the client fetches for a list of adoptable pet animals. Then, anytime the user clicks on the "Update Listings" button to update the list of adoptable pet animals, an overlay with an animated spinner icon gets placed on top of the previous listings. This preserves the previous listings in case the client cannot update the list of adoptable pet animals (due to a network error, etc.). Let's make a few adjustments to the <TypePage /> component to ensure that the previous listings are kept in state when the client fails to update the list of adoptable pet animals. 
We add another state variable, isUpdateFailed , that's set to true only if an error was encountered while fetching an updated list of adoptable pet animals. When isUpdateFailed is set to true , a generic error message "Uh Oh! We could not update the listings at this time. Please try again." is shown to the user. ( pages/types/[type].tsx ) If you find yourself stuck at any point during this tutorial, then feel free to check out the project's repository for this part of the tutorial here . Please stay tuned for future parts, which will cover topics like API routes and deployment of the Next.js application to Vercel! If you want to learn more advanced techniques with TypeScript, React and Next.js, then check out our Fullstack React with TypeScript Masterclass :


Static Site Generation with Next.js and TypeScript (Part III) - Optimizing Image Loading with Plaiceholder and BlurHash

Disclaimer - Please read the second part of this blog post here before proceeding. It explains the different data-fetching techniques that Next.js supports, and it guides you through the process of statically rendering a Next.js application page that fetches data at build time via the getStaticProps function. If you just want to jump straight into this tutorial, then clone the project repository and install the dependencies. Slow page loading times hurt the user experience. Anytime a user waits longer than a few seconds for a page's content to appear, they usually lose their patience and close out the page in frustration. A significant contributor to slow page loading times is image sizes. The larger an image is, the longer it takes the browser to download and render it to the page. One way to improve the perceived load time of images (and by extension, page) is to initially show a placeholder image to the user. This image should occupy the same space as the intended image to prevent cumulative layout shifting . Additionally, compared to the intended image, this image should be much smaller in size (at most, several KBs) so that it loads instantaneously (within the window of the page's first contentful paint). The placeholder image can be as simple as a single, solid color (e.g., Google Images or Medium) or as advanced as a blurred representation of the image (e.g., Unsplash). For Next.js application's, the <Image /> component from next/image augments the <img /> HTML element by automatically handling image optimizations, such as... These optimizations make the page's content immediately available to users to interact with. As a result, they help to improve not only a page's Core Web Vitals , but also the page's SEO ranking and user experience. Next.js <Image /> components support blurred placeholder images. To tell Next.js to load an image with a blurred placeholder image, you can either... If the Next.js application happens to load images from an external service like Unsplash, then for each image, would we need to manually create a blurred placeholder image, convert the blurred placeholder image into a Data URL and pass the Data URL to the blurDataUrl prop? With the Plaiceholder library, we can transform any image into a blurred placeholder image, regardless of where the image is hosted. The blurred placeholder image is a very low resolution version of the original image, and it can be embedded within the page via several methods: CSS, SVG, Base64 and Blurhash. Blurhash is an algorithm that encodes a blurred representation of an image as a small, compact string. Decoding this string yields the pixels that make up the blurred placeholder image. Using these pixels, you can render the blurred placeholder image within a <canvas /> element. Blurhash allows you to render a blurred placeholder image without ever having to send a network request to download it. Below, I'm going to show you how to improve image loading in a Next.js application with the Plaiceholder library and Blurhash. To get started, clone the project repository and install the dependencies. If you're coming from the second part of this tutorial series, then you can continue on from where the second part left off. Within the project directory, let's install several dependencies: Then, wrap the Next.js configuration with withPlaiceholder to extend the Next.js Webpack configuration so that Webpack excludes the sharp image processing library from the output bundle. 
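As a sketch (assuming the @plaiceholder/next package provides the withPlaiceholder helper, and with a domain list based on the image hosts discussed in this series), the configuration might look like this:

```js
// next.config.js (sketch)
const { withPlaiceholder } = require('@plaiceholder/next');

module.exports = withPlaiceholder({
  reactStrictMode: true,
  images: {
    // Whitelist the hosts that remote images are served from.
    domains: [
      'images.unsplash.com',
      'photos.petfinder.com',
      'dl5zpyw5k3jeb.cloudfront.net',
      'via.placeholder.com',
    ],
  },
});
```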
( next.config.js ) Specifying images.domains in the Next.js configuration restricts the domains that remote images can come from. This protects the application from malicious domains while still allowing Next.js to apply image optimizations to remote images. Currently, the images on the home page are high resolution and take a lot of time to be fully downloaded. In fact, without any image processing, some of the raw Unsplash images are as large as 20 MBs in size. However, by replacing these images with more optimized images, we can significantly improve the performance of the Next.js application. Using the <Image /> component from next/image and the Plaiceholder library, the home page will... These changes should shrink the bars in the developer console's network waterfall chart. To generate the blurred placeholder images, let's first modify the <HomePage /> component's getStaticProps() function so that these images get generated at build time. For each pet animal type, call the getPlaiceholder() method of the Plaiceholder library with one argument: a string that references the image. This string can be a relative path or an absolute URL. Here, the string will be an absolute URL since the images come directly from Unsplash. From the getPlaiceholder() method, we can destructure out any of the following: For the Next.js application, we will destructure out blurhash for a Blurhash-based blurred placeholder image and img for information about the actual image. ( pages/index.tsx ) Note #1 : The getPlaiceholder() method can accept a second, optional argument that lets you override the default placeholder size, configure the underlying Sharp library that Plaiceholder uses for image processing, etc. Check the Plaiceholder documentation here for more information. Note #2 : If you append the query parameters fm=blurhash&w=32 to the URL of an Unsplash image, then the response returned will be a Blurhash encoded string for that Unsplash image. Then, we need to update the shared/interfaces/petfinder.interface.ts file to account for the newly added properties ( blurhash and img ) on the AnimalType interface. ( shared/interfaces/petfinder.interface.ts ) After updating the shared/interfaces/petfinder.interface.ts file, let's replace the image that's set as a CSS background image in the <TypeCard /> component with two components: Run the Next.js application in development mode: When you visit the Next.js application in a browser, you will notice that the blurred placeholder images immediately appear. However, when the optimized images are loaded, you will notice that some of them are not aligned correctly within their parent containers: To fix this, let's use some of the <Image /> component's advanced props to instruct how an image should fit (via the objectFit prop) and be positioned (via the objectPosition prop) within its parent container. For these props to work, you must first set the <Image /> component's layout prop to fill so that the image can grow within its parent container (in both the x and y axes). The objectFit and objectPosition props correspond to the CSS properties object-fit and object-position . For objectFit , set it to cover so that the image occupies the entirety of the parent container's space while maintaining the image's aspect ratio. The outer portions of the image that fail to fit within the parent container will automatically be clipped out. 
For objectPosition , let's assign each pet animal type a unique objectPosition that positions the pet animal within the image to the center of the parent container. Any pet animal type that is not assigned a unique objectPosition will by default have objectPosition set to center . ( enums/index.ts ) ( pages/index.tsx ) ( shared/interfaces/petfinder.interface.ts ) When you re-run the Next.js application, you will notice that the images are now correctly aligned within their parent containers. When you check the terminal, you will encounter the following warning message being logged: Since we specified the layout prop as fill for the <Image /> component within the <TypeCard /> component, we tell the <Image /> component to stretch the image until it fills the parent element. Therefore, the dimensions of the image don't have to be specified, which means the height and width props should be omitted. Let's make the following changes to the <TypeCard /> component in components/TypeCard.tsx : Unsplash leverages Imgix to quickly process and deliver images to end users via a URL-based API . Based on the query parameters that you append to an Unsplash image's URL, you can apply various transformations to the image, such as face detection , focal point cropping and resizing . Given the enormous base dimensions of the Next.js application's Unsplash images (e.g., the dog image is 5184px x 3456px), we can resize these images to smaller sizes so that Plaiceholder can fetch them faster. Since the images occupy, at most, a parent container that's 160px x 160px, we can resize the initial Unsplash images so that they are, at most, 320px in width (and the height will be resized proportionally to this width based on the image's aspect ratio). Let's append the query parameter w=320 to the Unsplash image URLs in enums/index.ts . ( enums/index.ts ) When you re-run the Next.js application, you will notice that the Next.js application runs much faster now that Plaiceholder doesn't have to wait seconds to successfully fetch large Unsplash images. If you find yourself stuck at any point during this tutorial, then feel free to check out the project's repository for this part of the tutorial here . Proceed to the next part of this tutorial series to learn how to create pages for dynamic routes in Next.js. If you want to learn more advanced techniques with TypeScript, React and Next.js, then check out our Fullstack React with TypeScript Masterclass .
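To recap those last two adjustments, here is a sketch of what an entry in enums/index.ts might look like; the URLs, keys and position values are placeholders rather than the project's actual data.

```ts
// enums/index.ts (sketch)
export const ANIMAL_TYPES: Record<string, { imageUrl: string; objectPosition?: string }> = {
  dog: {
    // w=320 asks Unsplash (Imgix) to resize the image to 320px wide before
    // Plaiceholder ever fetches it.
    imageUrl: 'https://images.unsplash.com/<photo-id>?w=320',
    // Passed to the <Image /> component's objectPosition prop.
    objectPosition: '50% 35%',
  },
  horse: {
    imageUrl: 'https://images.unsplash.com/<photo-id>?w=320',
    // No objectPosition here, so the component falls back to "center".
  },
};
```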


Static Site Generation with Next.js and TypeScript (Part I) - Project Overview

Many of today's most popular web applications, such as G-Mail and Netflix, are single-page applications (SPAs). Single-page applications deliver highly engaging and exceptional user experiences by dynamically rendering content without fully reloading whole pages. However, because single-page applications generate content via client-side rendering, the content might not be completely rendered by the time a search engine (or bot) finishes crawling and indexing the page. When it reaches your application, a search engine will read the empty HTML shell (e.g., the HTML contains a <div id="root" /> in React) that most single-page applications start off with. For a smaller, client-side rendered application with fewer and smaller assets and data requirements, the application might have all the content rendered just in time for a search engine to crawl and index it. On the other hand, for a larger, client-side rendered application with many and larger assets and data requirements, the application needs a lot more time to download (and parse) all of these assets and fetch data from multiple API endpoints before rendering the content to the HTML shell. By then, the search engine might have already processed the page, regardless of the content's rendering status, and moved on to the next page. For sites that depend on being ranked at the top of a search engine's search results, such as news/media/blogging sites, the performance penalties and slower first contentful paint of client-side rendering may lower a site's ranking. This results in less traffic and business. Such sites should not client-side render entire pages worth of content, especially when the content infrequently (i.e., due to corrections or redactions) or never changes. Instead, these sites should serve the content already pre-generated as plain HTML. A common strategy for pre-generating content is static site generation . This strategy involves generating the content in advance (at build time) so that it is part of the initial HTML document sent back to the user's browser when the user first lands on the site. By exporting the application to static HTML, the content is created just once and reused on every request to the page. With the content made readily available in static HTML files, the client has much less work to perform. Similar to other static assets, these files can be cached and served by a CDN for quicker loading times. Once the browser loads the page, the content gets hydrated and maintains the same level of interactivity as if it was client-side rendered. Unlike Create React App , popular React frameworks like Gatsby and Next.js have first-class, built-in static site generation support for React applications. With the recent release of Next.js v12, Next.js applications build much faster with the new Rust compiler (compared to Babel, this compiler is 17x faster). Not only that, Next.js now lets you run code on incoming requests via middleware, and its APIs are compatible with React v18. In this multi-part tutorial, I'm going to show you how to... We will be building a simple, statically generated application that uses the Petfinder API to display pets available for adoption and recently adopted. All of the site's content will be pre-rendered in advance with the exception of pets available for adoption, which the user can update on the client-side. 
Home Page ( / ): Listings for Pet Animal Type ( /types/<type> ): Visit the live demo here: https://petfinder-nextjs.vercel.app/ To get started, initialize the project by creating its directory and package.json file. Note : If you want to skip these steps, then run the command npx create-next-app@latest --ts to automatically scafford a Next.js project with TypeScript. Then, proceed to the next section of this tutorial. Install the following dependencies and dev. dependencies: Add a .prettierrc file with an empty configuration object to accept Prettier's default settings. Add the following npm scripts to the package.json file: Here's what each script does: At the root of the project directory, create an empty TypeScript configuration file ( tsconfig.json ). By running next , Next.js automatically updates the empty tsconfig.json file with Next.js's default TypeScript configuration. ( tsconfig.json ) Note : If you come across the error message Error: > Couldn't find a `pages` directory. Please create one under the project root , then at the root of the project directory, create a new pages directory. Then, re-run the npm run dev command. Note : Double-check the configuration, and make sure that the moduleResolution compiler option is not missing and set to node . Otherwise, you will encounter the TypeScript error Cannot find module 'next' . Currently, this approach generates a TypeScript configuration that has the strict compiler option set to false . If you bootstrapped the project via the create-next-app CLI tool ( --ts / --typescript option), then the strict compiler option will be set to true . Let's set the strict compiler option to true so that TypeScript enforces stricter rules for type-checking. ( tsconfig.json ) Note : Setting strict to true automatically enables the following seven type-checking compiler options: noImplicitAny , noImplicitThis , alwaysStrict , strictBindCallApply , strictNullChecks , strictFunctionTypes and strictPropertyInitialization . Additionally, this command auto-generates a next-env.d.ts file at the root of the project directory. This file guarantees Next.js types are loaded by the TypeScript compiler. ( next-env.d.ts ) To further configure Next.js, create a next.config.js file at the root of the project directory. This file allows you to override some of Next.js's default configurations, such as the project's base Webpack configurations and mapping between incoming request paths and destination paths. For now, let's just opt-in to React's Strict Mode to spot out any potential problems, such as legacy API usage and unsafe lifecycles, in the application during development. ( next.config.js ) Similar to the tsconfig.json file, running next lint automatically installs the eslint and eslint-config-next development dependencies. Plus, it creates a new .eslintrc.json file with Next.js's default ESLint configuration. Note : When asked "How would you like to configure ESLint?" by the CLI, select the "Strict" option. ( eslintrc.json ) This application will be styled with utility CSS rules from the Tailwind CSS framework . If you are not concerned with how the application is styled, then you don't have to set up Tailwind CSS for the Next.js application and can proceed to the next section. Otherwise, follow the directions here to properly integrate in Tailwind CSS. To register for a Petfinder account, visit the Petfinder for Developers and click on "Sign Up" in the navigation bar. Follow the registration directions. 
Upon creating an account, go to https://www.petfinder.com/developers/ and click on the "Get an API Key" button. The form will prompt you for two pieces of information: "Application Name" and "Application URL" (at the minimum). For "Application Name," you can enter anything as the application name (e.g., "find-a-cute-pet" ). For "Application URL," you can enter https://<application_name>.vercel.app since the application will be deployed on Vercel by the end of this tutorial series. To see if an https://<application_name>.vercel.app URL is available, visit the URL in a browser. If Vercel returns a 404: NOT_FOUND page with the message <application_name>.vercel.app might be available. Click here to learn how to assign it to a project. , then the application name is likely available and can be used for completing the form. You can find the API key (passed as the client ID in the request payload to the POST https://api.petfinder.com/v2/oauth2/token endpoint) and secret (passed as the client secret in the request payload to the POST https://api.petfinder.com/v2/oauth2/token endpoint) under your account's developer settings. Here, you can track your API usage. Each account comes with a limit of 1000 daily requests and 50 requests per second. At the root of the project directory, create a .env file with the following environment variables: ( .env ) Replace the Xs with your account's unique client ID and secret. The home page features a grid of cards. Each card represents a pet animal type catalogued by the Petfinder API: These cards lead to pages that contain listings of pets recently adopted and available for adoption, along with a list of breeds associated with the pet animal type (e.g., Shiba Inu and Golden Retriever for dogs). Suppose you have to build this page with client-side rendering only. To fetch the types of animal pet from the Petfinder API, you must: Initially, upon visiting the page, the user would be presented with a loader as the client... Having to wait on the API to process these two requests before any content is shown on the page only adds to a user's frustrations. Wait times may even be worst if the API happens to be experiencing downtime or dealing with lots of traffic. You could store the access token in a cookie to avoid sending a request for a new access token each time the user loads the page. Still, you are left with sending a request for a list of types each time the user loads the page. Note : For stronger security (i.e., mitigate cross-site scripting by protecting the cookie from malicious JavaScript code), you would need a proxy backend system that interacts with the Petfinder API and sets an HttpOnly cookie with the access token on the client's browser after obtaining the token from the API. More on this later. This page serves as a perfect example for using static site generation over client-side rendering. The types returned from the API will very rarely change, so fetching the same data for each user is repetitive and unnecessary. Rather, just fetch this data once from the API, build the page using this data and serve up the content immediately. This way, the user does not have to wait on any outstanding requests to the API (since no requests will be sent) and can instantly engage with the content. With Next.js, we will leverage the getStaticProps function, which runs at build time on the server-side. 
Inside this function, we fetch data from the API and pass the data to the page component as props so that Next.js pre-renders the page at build time using the data returned by getStaticProps . Note : In development mode ( npm run dev ), getStaticProps gets invoked on every request. Previously, we created a pages directory. This directory contains all of the Next.js application's page components. Next.js's file-system based router maps page components to routes. For example, pages/index.tsx maps to / , pages/types/index.tsx maps to /types and pages/types/[type.tsx] maps to types/:type ( :type is a URL parameter). Now let's create four more directories: The Petfinder API documentation provides example responses for each of its endpoint. With these responses, we can define interfaces for the responses from the following endpoints: Create an interfaces directory within the shared directory. Inside of the interfaces directory, create a petfinder.interface.ts file. ( shared/interfaces/petfinder.interface.ts ) Note : This tutorial skips over endpoints related to organizations. Inside of the pages directory, create an index.tsx file, which corresponds to the home page at / . To build out the home page, we must first define the <HomePage /> page component's structure. ( pages/index.tsx ) Then, create the <TypeCardsGrid /> component, which renders a grid of cards (each represents a pet animal type). This component places the cards in a... ( components/TypeCardsGrid.tsx ) Then, create the <TypeCard /> component, which renders a card that represents a pet animal type. This component shows a generic picture of the pet animal type and a link to browse listings (recently adopted and available for adoption) of pet animals of this specific type. Note : The types returned from the Petfinder API do not have an id property, which serves as both a unique identifier and a URL slug (e.g., the ID of type "Small & Furry" is "small-furry"). In the next section, we will use the type name to create an ID. ( components/TypeCard.tsx ) Since the Petfinder API does not include an image for each pet animal type, we can define an enumeration ANIMAL_TYPES that supplements the data returned from the API with an Unsplash stock image for each pet animal type. To account for the images' different dimensions in the <AnimalCard /> component, we display the image as a background cover image of a <div /> and position the image such that the animal in the image appears in the center of a 10rem x 10rem circular mask. ( .enums/index.ts ) For users to browse the pet animals returned from the Petfinder API for a specific pet animal type, they can click on a card's "Browse Listings" link. The link is wrapped with a Next.js <Link /> component (from next/link ) to enable client-side navigation to the listings page. This kind of behavior can be found in single-page applications. When a user clicks on a card's "Browse Listings" link, the browser will render the listings page without having to reload the entire page. The <Link /> component wraps around an <a /> element, and the href gets passed to the component instead of the <a /> element. When built, the generated markup ends up being just the <a /> element, but having an href attribute and having the same behavior as a <Link /> component. To apply TailwindCSS styles throughout the entire application, we need to override the default <App /> component that Next.js uses to initialize pages. Next.js wraps every page component with the <App /> component. 
This is also useful for other reasons like... To override the default <App /> component, create an _app.tsx file within the pages directory. Inside of this file, import the styles/globals.css file that contains TailwindCSS directives, and define an <App /> component, like so: ( pages/_app.tsx ) The <App /> component takes two props: To verify that the homepage works and that the TailwindCSS styles have been applied correctly, let's run the application: Within a browser, visit localhost:3000 . Currently, the <TypeCardsGrid /> is not rendered since we have not yet fetched any pet animal types from the Petfinder API. If you find yourself stuck at any point during this tutorial, then feel free to check out the project's repository for this part of the tutorial here . Proceed to the next part of this tutorial series to learn how to fetch this data from the Petfinder API with the getStaticProps function. If you want to learn more advanced techniques with TypeScript, React and Next.js, then check out our Fullstack React with TypeScript Masterclass :
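To recap the custom <App /> override described above, a minimal pages/_app.tsx typically looks like the following sketch (the global stylesheet is assumed to live at styles/globals.css, as mentioned earlier):

```tsx
// pages/_app.tsx (sketch)
import type { AppProps } from 'next/app';
import '../styles/globals.css'; // Tailwind CSS directives live here

// Component is the active page; pageProps are the props preloaded for it
// (e.g., by getStaticProps at build time).
const App = ({ Component, pageProps }: AppProps) => <Component {...pageProps} />;

export default App;
```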


How to Use Thunks with Redux Toolkit and TypeScript

Last time we used Redux Toolkit to create a todo app. Since RTK is considered to be the standard way to write Redux applications, we will use thunks with Redux Toolkit to write asynchronous logic as well. Thunks are a way to manage side effects when working with Redux. For instance, when we want to fetch some data from a server, we need to perform an HTTP request. That request is a side effect because it interacts with the "outer world". Redux itself doesn't know anything about side effects in an app. All the work is done in the redux-thunk middleware. It extends the store's abilities and lets you write async logic that interacts with the store. Redux Toolkit's configureStore function automatically sets up the thunk middleware by default. Right now we have a todo app that works synchronously, so we can only use the runtime storage to store the user's data. In real-life applications, we would need to store the user's data in some kind of persistent storage, such as on a server. Let's change our app a bit and, as an example, create a method to load todos from the server and show them. First of all, we're going to slightly refactor our previous code. We will extract our useTypedSelector from features/todos/TodoList.tsx to app/store.ts : ...This will help us re-use this function in another component later. Also, we will extract some todo types into features/todos/types.ts : ...which will help us cover the API requests with types. As a backend, we will use JSONPlaceholder . It allows us to fetch a list of todo objects that we can render in our UI. The endpoint we use returns a list of todo objects. RTK provides a function called createAsyncThunk for creating thunks. Let's start with creating one: Let's write the real request logic using our placeholder backend: Cool, now it's working. However, the result we return is not typed. TypeScript doesn't yet know the structure of the data we return. For typing the result , we're going to need the Todo type we extracted earlier: Okay, now our fetchTodos function doesn't take any arguments. But let's think about a situation where we need to control the number of fetched todos. We could use a function argument for that: The only thing left to cover is errors. Imagine that the server responded with a status 400. In this case, we would need to tell our users that the todos weren't loaded and show the error message. Now, let's add our thunk to todosSlice . For that, we need to slightly change our TodosState : After that's done, let's define a selector for getting the status value: Finally, let's add reducers for handling our fetchTodos actions: The last thing to do is to use the fetchTodos thunk inside a component. Let's create a button that, when clicked, fetches a list of todos from the server. Now we can use this component to load todos from the server.
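As a compact recap of the pieces described in this post, here is a sketch of what the thunk and the slice changes might look like; the state shape and exact details are assumptions, not the code from the repository.

```ts
// features/todos/todosSlice.ts (sketch)
import { createAsyncThunk, createSlice } from '@reduxjs/toolkit';
import { Todo } from './types';

// The optional argument controls how many todos to fetch.
export const fetchTodos = createAsyncThunk<Todo[], number | undefined, { rejectValue: string }>(
  'todos/fetchTodos',
  async (limit, { rejectWithValue }) => {
    const response = await fetch(
      `https://jsonplaceholder.typicode.com/todos${limit ? `?_limit=${limit}` : ''}`,
    );
    if (!response.ok) {
      // Surface server errors (e.g., a 400) so the UI can show a message.
      return rejectWithValue('Could not load todos.');
    }
    return (await response.json()) as Todo[];
  },
);

interface TodosState {
  list: Todo[];
  status: 'idle' | 'loading' | 'failed';
}

const initialState: TodosState = { list: [], status: 'idle' };

const todosSlice = createSlice({
  name: 'todos',
  initialState,
  reducers: {},
  extraReducers: (builder) => {
    builder
      .addCase(fetchTodos.pending, (state) => {
        state.status = 'loading';
      })
      .addCase(fetchTodos.fulfilled, (state, action) => {
        state.status = 'idle';
        state.list = action.payload;
      })
      .addCase(fetchTodos.rejected, (state) => {
        state.status = 'failed';
      });
  },
});

export default todosSlice.reducer;
```

A component can then dispatch fetchTodos(10) from a button's click handler and read the status value through a selector to decide what to render.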


How to Use Redux Toolkit with TypeScript

The Redux Toolkit package is intended to be the standard way to write Redux logic. It simplifies store creation and decreases the amount of boilerplate code. Let's rewrite the application we wrote in one of the earlier posts from scratch to see what will change. First of all, we need to create a new app. We can use Create React App with the TypeScript template and then add Redux Toolkit, or we can use a template that already includes everything we need. Now, if you look in the redux-toolkit-app directory, you will see not only App.tsx but also two new directories: For now, let's clear the features directory; we won't need anything inside of it. Redux Toolkit introduces a new concept called a slice . It is an object that contains a reducer function as a field named reducer , and action creators inside an object called actions . Basically, a slice is the part of the store responsible for a single feature. We may think of it as a set of actions, a reducer, and an initial state. Slices allow us to write less code and keep feature-related code closer together, thus increasing cohesion . For our first slice, we're going to use the createSlice function. Let's start with re-creating the typings for our store: Then, we create an initial state: And now, we can start creating a slice: The todosSlice object now contains actions and reducer fields. Let's export everything they have from the module: To use the state in our components, we need to create a state selector. For that, we need to define a RootState type. Let's change app/store.ts a bit: Now return to todosSlice.ts and create a selector: The code of our component will be almost the same, except for imports and selector usage. Let's review it: And the list: We gain some advantages using RTK, such as: There are, however, disadvantages as well:

What Happened to the FC Type's Implicit children Prop in @types/react v18?

In @types/react v18, the implicit children prop was removed from the FC type. According to a pull request for @types/react , this prop was supposed to be removed in @types/react v17, but its removal was postponed to @types/react v18 so that developers could easily upgrade their React applications to v17 with few to zero problems. Assuming React v18 and @types/react v18 are both installed within a React project, and given the following code for a component that accepts children as a prop... TypeScript raises the following error: The implicit children prop from the FC type ended up being removed for consistency; TypeScript should always reject excess props. Therefore, to resolve this error, manually (and explicitly) define the children prop in the component's props interface, like so: Here's a CodeSandbox demo that demonstrates both the problem and solution: https://codesandbox.io/embed/removal-implicit-children-test-xdfdpl?fontsize=14&hidenavigation=1&theme=dark Note : You may notice that PropsWithChildren still exists within the type definition file, and you may be tempted to use it to resolve this error. However, this type is kept around for backwards compatibility purposes and for a specific codemod . If you want to learn more advanced techniques with TypeScript and React, then check out our Fullstack React with TypeScript Masterclass :
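For concreteness, a minimal sketch of the fix described above looks like this; the component and prop names are made up for the example.

```tsx
import { FC, ReactNode } from 'react';

interface CardProps {
  title: string;
  // In @types/react v18, children must be declared explicitly.
  children: ReactNode;
}

const Card: FC<CardProps> = ({ title, children }) => (
  <section>
    <h2>{title}</h2>
    {children}
  </section>
);

export default Card;
```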


Find and RegExp Match - How to Fix Object is possibly 'undefined' and Object is possibly 'null' Errors in TypeScript

Consider the following TypeScript code snippet: Let's assume that the list of cars is fetched from a remote API service. In this TypeScript code snippet, we... However, there are two problems with this TypeScript code snippet: As a result, TypeScript gives you the warnings Object is possibly 'undefined' for the find() method and Object is possibly 'null' for the subsequently chained match() method. To keep method chaining possible, we need to provide default values that ensure chained methods never get called on undefined and null values. Here's what this solution looks like: This solution may look compact and will save you a few bytes, but at the same time, you sacrifice readability. This can be taxing for less experienced developers who are unfamiliar with some of this syntax. By breaking up the method chaining into multiple statements, developers can better understand what exactly is happening in the code, step by step. Additionally, you can add more concrete checks to ensure that the code is even more type-safe. Here's what this solution looks like: If you want to learn more advanced techniques with TypeScript and React, then check out our Fullstack React with TypeScript Masterclass :
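Since the original snippets are not reproduced above, here is a small self-contained sketch of both approaches; the car data and the regular expression are made up for illustration.

```ts
interface Car {
  id: number;
  description: string;
}

const cars: Car[] = [
  { id: 1, description: 'Ford Mustang (1969)' },
  { id: 2, description: 'Toyota Corolla (2014)' },
];

// Compact version: provide fallbacks so the chained calls never run on
// undefined or null values.
const year =
  (cars.find((car) => car.id === 2) ?? { description: '' }).description
    .match(/\((\d{4})\)/)?.[1] ?? 'unknown';
console.log(year); // "2014"

// More readable version: break the chain apart and narrow each value.
const car = cars.find((c) => c.id === 2);
if (car) {
  const match = car.description.match(/\((\d{4})\)/);
  if (match) {
    console.log(match[1]); // "2014"
  }
}
```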


Fullstack React with TypeScript Masterclass is LIVE 🎉

The Fullstack React with TypeScript Masterclass is now live! 🎉 This Masterclass teaches you practical React and TypeScript for developing apps from idea to completion, along with all the important tools in the React ecosystem. It expands on the material taught in our comprehensive book, Fullstack React with TypeScript , and gives you over 10 hours of video lessons taught by Maksim Ivanov. By the end of the first module, you'll already have created your environment for React with TypeScript, and you will have completed basic tasks with TypeScript. The subsequent modules then continue your journey through building multiple apps and learning techniques including: This masterclass was developed by Maksim Ivanov and Alex Bespoyasov and taught by Maksim. Maksim worked for leading game developer Mojang, where he helped develop front-end interfaces with React and TypeScript. He has continued his front-end work at Spotify, where he develops interfaces with React, TypeScript, and related tools. Alex is a frontend developer who works with the technology company 0+X, where he consults on developing and maintaining applications with React and related tooling. With their combined depth of expertise, Maksim and Alex will quickly get you up to speed on creating modern React and TypeScript apps that your users will love. You can read more details about the Masterclass over at the Fullstack React with TypeScript Masterclass page .


Optimistic UIs with React, Apollo Client and TypeScript (Part III) - Handling Errors

Disclaimer - Please read the second part of this blog post here before proceeding. It walks through the steps taken to update a UI optimistically with Apollo Client. However, it does not discuss how to elegantly handle failed mutations, which, by default, automatically undo optimistic updates made to the UI. If sending a message within a messages client causes the UI to update optimistically with this message, then anytime the server encounters an error (e.g., network or GraphQL) while performing the mutation, the message instantly disappears from the UI. For the user to resend the message, they must retype the message in the input field and resend it. Another problem that arises from reverting the optimistic UI update is the loss of the original timestamp the message was sent at since Apollo Client automatically removes the optimistic data from the cache. In the final part of this tutorial series, we will take the messages client that has been built over the course of the past two parts of this tutorial series and implement a more convenient way for users to resend unsent messages (as a result of a network or GraphQL error). When an error occurs during the mutation, the UI should... By the end of this tutorial, you will have recreated a robust optimistic UI that is capable of resending unsent messages like the one found in Messages : To get started for this tutorial, download the project from the part-2 branch of the GitHub repository here and follow the directions in the README.md file to set up the project. One thing you will notice in the codebase is the template literals passed to the gql tags. Each template literal encloses a GraphQL query or mutation that is executed within a component of the application. Let's move these template literal tags out into a separate module and export each as a named export: ( src/graphql/fragments.ts ) Then, visit each component's source file, and anytime you come across one of these template literal tags, import its corresponding named export and replace it accordingly with the imported value. By refactoring these template literal tags, we can centralize all of the template literal tags in a single place. We import gql from the @apollo/client library just once within the entire application (in the src/graphql/fragments.ts file), and we can reuse these template literal tags anywhere in the application. To understand the general strategy we will take to bypass the UI's instantaneous undoing of the optimistic update, we must dive deep into how the Apollo Client handles an optimistic UI update within the in-memory cache. Disclaimer : At the time of this writing, the current version of the Apollo Client library ( @apollo/client ) is v3.5.8. If you are reading this several months/years after the original publication date, then the underlying architecture may have changed. In part 2 of this tutorial series , I mentioned that the Apollo Client creates and stores a separate, optimistic version of the message in the cache. Here, "optimistic version" refers to an optimistic data layer ( Layer layer) that Apollo Client creates on top of the Stump and Root layers of the cache. Each layer of the cache is responsible for managing its own data, whether it is data associated with an optimistic UI update ( Layer layer) or queries ( Root layer). Partitioning data this way makes it easy to identify which set of optimistic changes (made to the cache's data) to undo when the GraphQL API server returns the result of a mutation. 
When you inspect the optimistic layer via a JavaScript debugger in the developer tools, you will find that the layers reference each other via a parent property. The deeply nested Root layer holds all the data associated with queries (i.e., the messages seen in the messages client), the Stump layer holds no data and the optimistic layer ( Layer layer) holds all data associated with an optimistic UI update (i.e., the sent message). The Stump layer serves as a buffer (between the optimistic data layers and root data layer) that allows subsequent optimistic updates to invalidate the cached results of previous optimistic updates. With all the Layer layers sharing the Stump layer, all optimistic reads read through this layer. As a buffer, no data is ever written to this layer, and look up and merge calls skip over this layer and get forwarded directly to the Root layer. Note : For more information, please read the pull request that introduced the Stump layer to the cache. Whenever the mutate function is called, the Apollo Client checks if an optimisticResponse option is provided for the mutation. If so, then the Apollo Client marks the mutation as optimistic and wraps the optimistic write within a transaction . When performing this transaction , the Apollo Client adds a new optimistic data layer to the cache . Notice how there are twenty-one messages on the optimistic data layer (the twenty original messages queried from the GraphQL API server plus the message added via the optimistic update) and twenty messages on the root data layer (the twenty original messages queried from the GraphQL API server). Once the cache finishes updating with the new optimistic data layer, broadcastQueries gets called. All active queries listening to changes to the cache will update, which causes the UI to also update. Since broadcastQueries is an asynchronous operation, the UI may not immediately update with the optimistic data even if the debugger has moved on to the next breakpoint. By isolating all of the optimistic updates (carried out by this transaction) to this layer, the Apollo Client never merges the optimistic data with the cache's root-level data. This ensures that the optimistic updates can be easily undone, such as on a mutation error , by deleting the optimistic data layer and readjusting the remaining optimistic data layers (from other pending mutations) , all without ever touching the root-level data. If the mutation fails (or succeeds), then by calling broadcastQueries , the Apollo Client updates the active queries based on recent updates made to the cache, which no longer has the optimistic data layer for the addMessage mutation. This removes the sent message from the UI. Now that we know how the cache works, let's devise a solution that keeps the sent message shown even when the addMessage mutation fails. Given that the first broadcastQueries call updates the UI with the optimistic data (the sent message) and the last broadcastQueries call undoes updates to the UI that involve the optimistic data, we need to add the sent message to the cache's root-level data at the moment the mutation fails between these two calls. This duplicate message will have an identifier of ERROR/<UUID> to... While both pieces of data will exist at the same time, only the message on the optimistic data layer, not the duplicate message on the root data layer, will be rendered. Only the message on the optimistic data layer existed within the cache at the time of the first broadcastQueries call. 
By adding it to the root data layer, the duplicate message will still exist by the time the last broadcastQueries call occurs. However, since the optimistic data layer gets removed just before this broadcastQueries call, the message on the optimistic data layer will no longer exist. Both messages contain the same text. Hence, nothing seems to change on the UI. The user never sees the optimistic data disappear. Both messages never get rendered together. For us to write the duplicate message to the cache, we must add an onError link to the Apollo Client's chain of link objects. The onError link listens for networking and GraphQL errors during a GraphQL operation (e.g., a query or mutation) and runs a callback upon encountering a networking/GraphQL error. Currently, the Apollo Client already uses the HTTPLink link to send a GraphQL operation to a GraphQL API server that performs it and responds with either a result or an error. Since each link represents a piece of logic to apply to a GraphQL operation, we must connect this link with the onError link to create a single link chain. Let's do this with the from method, which additively composes the links. As a terminating link, the HTTPLink link ends up being the last link in the chain. Defining the links in this order allows the server's response to bubble back up to the onError link. Within the callback passed to the onError link, we can check for any networking/GraphQL errors. If the response is successful, then the onError link simply ignores the response's data as it makes its way to the cache. Shortly after the first broadcastQueries call, the addMessage mutate function executes getObservableFromLink, which obtains the observable of the Apollo Client's link. Unlike promises, observables are lazily evaluated, support array-like methods (e.g., map and filter) and can push multiple values. Then, the addMessage mutate function invokes the observable by subscribing to it. Essentially, invoking the observable of the HTTPLink link sends a request for this mutation to the GraphQL API server. Note: If you're unfamiliar with observables, you can learn more about them here. Developers who have worked with Angular and/or RxJS have likely previously come across observables. If the Apollo Client encounters a networking/GraphQL error, then the onError link's callback gets called. This callback logs the caught error and checks the name of the GraphQL operation. If the name of the GraphQL operation happens to be AddMessage, which corresponds to the AddMessage mutation, then the Apollo Client adds a message with the same text and sender as the originally sent message to the root data layer of the cache. We create this message based on the variable values provided to the mutation: text and userId. Note: We can pass true as the second argument to the readQuery method ( const { messages } = cache.readQuery({ query: GET_MESSAGES }) ) to include the optimistic data (i.e., the message on the optimistic data layer) in the list of queried messages. However, there are some caveats to this approach: if there are multiple ongoing AddMessage mutations, then the list will also include the optimistic data from those mutations. Therefore, filtering the messages by text and userId to find the optimistically created message is unreliable, especially if the user has just sent several pending, consecutive messages with the same text. This makes it difficult to modify the optimistically created message's id to follow the ERROR/<UUID> format.
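A minimal sketch of this wiring is shown below. It assumes the GET_MESSAGES document and Message fields from the earlier sketch, a placeholder endpoint URL, and crypto.randomUUID() as one possible way to produce the ERROR/<UUID> identifier; treat it as an outline of the approach rather than the project's exact src/index.tsx.

```ts
// src/index.tsx (excerpt) -- a sketch of the onError + HttpLink chain; names and the endpoint are assumptions.
import { ApolloClient, HttpLink, InMemoryCache, from } from "@apollo/client";
import { onError } from "@apollo/client/link/error";
import { GET_MESSAGES } from "./graphql/fragments";

const cache = new InMemoryCache();

const errorLink = onError(({ graphQLErrors, networkError, operation }) => {
  if (!graphQLErrors && !networkError) return;
  console.error(graphQLErrors ?? networkError);

  // Only duplicate the optimistic message for the AddMessage mutation.
  if (operation.operationName !== "AddMessage") return;

  const { text, userId } = operation.variables;
  const existing = cache.readQuery<{ messages: any[] }>({ query: GET_MESSAGES });

  // Write a copy of the sent message to the root data layer so it survives
  // the removal of the optimistic data layer.
  cache.writeQuery({
    query: GET_MESSAGES,
    data: {
      messages: [
        ...(existing?.messages ?? []),
        {
          __typename: "Message",
          id: `ERROR/${crypto.randomUUID()}`, // marks the message as unsent
          text,
          userId,
          createdAt: new Date().toISOString(),
        },
      ],
    },
  });
});

// The terminating link sends the operation to the GraphQL API server.
const httpLink = new HttpLink({ uri: "http://localhost:4000/graphql" });

export const client = new ApolloClient({
  cache,
  // errorLink comes first so the server's response bubbles back up through it.
  link: from([errorLink, httpLink]),
});
```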
Note also that because the discrepancy between this message's timestamp and the timestamp recorded within the onError callback is very small, the ordering of the messages is unaffected. Finally, the error callback of the observable receives the error, removes the optimistic data layer associated with the mutation and calls broadcastQueries to update the active queries. Since we added a copy of the message to the cache before the optimistic data layer was removed, the view will still display the message to the user. Within the src/index.tsx file, let's add the onError link to the Apollo Client's link chain, like so: ( src/index.tsx ) In the browser, wait for the application to fully refresh. Select a user. Once the application loads the messages, open the developer tools. Under the network tab of the developer tools, select the "Offline" option under the throttling dropdown to simulate zero internet connectivity. When you try to send a message, the message will remain on the UI even if the mutation fails. Now the user no longer has to re-type the message to send it. Plus, the timestamp at which it was originally sent will be preserved (more or less). Click here for a diagram that visually explains the solution. At this point, the sent messages and unsent messages look identical. Each one has a blue background with white text. To distinguish sent messages from unsent messages, let's add a button next to each message that could not be sent as a result of a failed mutation. The button will be displayed as a red exclamation point icon. When clicked, a dialog will pop open and ask the user for confirmation to resend the message (at its original createdAt timestamp). Within the JSX of the <MessagesClient /> component, next to the <p /> element with the message's text, we need to check whether the message's sender is the current user (since unsent messages displayed in the client belong to the current user) and whether the message's ID contains an ERROR substring. Messages that satisfy these two checks are unsent messages. Note: Optimistically created messages do not have the ERROR substring in their IDs, so they are unaffected by this change. This ERROR substring gets introduced only after a failed mutation. Let's define the handleOnRetry function. It accepts the message as an argument. When executed, the function pops open a dialog that asks the user for confirmation to resend the message. Once the user confirms, the Apollo Client performs the AddMessage mutation, but this time with the variables isRetry and createdAt. These two optional variables tell the AddMessage mutation's resolver to set the created message's timestamp to the timestamp provided by the createdAt variable, not the server's current timestamp. This ensures the messages are in the correct order the next time the application fetches the list of messages from the server. Visit the Codesandbox for the server here for the implementation of the AddMessage mutation's resolver. If the mutation successfully completes, then the update callback function gets called with the Apollo Client cache and the result of the mutation. We extract the message returned for the AddMessage mutation and update the currently cached message whose ID follows the Message:ERROR/<UUID> format. The updateFragment method fetches a Message object with an ID of Message:ERROR/<UUID> and replaces the fetched Message object's ID with the returned message's ID. With this update, the cached message will no longer be recognized as an unsent message. The fragment determines the shape of the fetched data.
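Sketched below with hypothetical names (a window.confirm dialog stands in for the confirmation UI), assuming the Message type, the addMessage mutate function and the CACHE_NEW_MESSAGE_FRAGMENT defined in the next step are all in scope within <MessagesClient />; the ID-swapping update mirrors the approach described above but is not the project's exact code.

```tsx
// A sketch of the retry flow inside <MessagesClient />.
const handleOnRetry = (message: Message) => {
  // Ask the user to confirm before resending at the original timestamp.
  if (!window.confirm("Resend this message?")) return;

  addMessage({
    variables: {
      text: message.text,
      userId: message.userId,
      isRetry: true,               // tells the resolver to reuse the original timestamp
      createdAt: message.createdAt,
    },
    update(cache, { data }) {
      const sentMessage = data?.addMessage;
      if (!sentMessage) return;

      // Swap the unsent message's ERROR/<UUID> ID for the ID returned by the server,
      // so the cached message is no longer treated as unsent.
      cache.updateFragment(
        { id: `Message:${message.id}`, fragment: CACHE_NEW_MESSAGE_FRAGMENT },
        (cached) => (cached ? { ...cached, id: sentMessage.id } : cached)
      );
    },
  });
};
```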
Within the src/types/fragments.ts file, define and export the CACHE_NEW_MESSAGE_FRAGMENT fragment. ( src/types/fragments.ts ) If the mutation ends up being unsuccessful, then we must prevent the onError link from duplicating the cached unsent message. If we passed an isRetry variable to the AddMessage mutation, then we should skip the duplication. Since we added two optional variables that can be passed to the AddMessage mutation, isRetry and createdAt, let's make several adjustments to account for these variables. Within the src/types/index.ts file, add the optional properties isRetry and createdAt to the AddMessageMutationVariables interface: Within the src/graphql/fragments.ts file, add the isRetry and createdAt variables to the AddMessage mutation string: If you look at the CodeSandbox for the server, then you will notice that... After making these changes, here's how the resend functionality should look: Lastly, let's render a "Delivered" status text beneath the current user's last sent message once the message's corresponding AddMessage mutation successfully completes. Within the <MessagesClient /> component, define the state variable lastDeliveredMessageId and the update function setLastDeliveredMessageId with the useState Hook. lastDeliveredMessageId stores the ID of the last message sent by the current user. In the useMutation call, include an onCompleted callback within the passed options. This callback gets called as soon as the mutation's result data is available. Inside this callback, call setLastDeliveredMessageId with the sent message's ID. Within the JSX of the <MessagesClient /> component, place the "Delivered" status text right next to the <div className="mt-0.5" /> element. If the message could not be sent, then display a "Not Delivered" status text. Note: The i index comes from the map method's arguments. Reset the ID anytime the user attempts to send a message. So altogether, here's how everything should look: ( src/components/MessagesClient.tsx ) ( src/graphql/fragments.ts ) ( src/types/index.ts ) ( src/index.tsx ) If you find yourself stuck at any point while working through this tutorial, then feel free to visit the main branch of this GitHub repository here for the code. Try implementing optimistic UI patterns in your applications and reap the benefits of a faster, more fluid user experience. If you want to learn more advanced techniques with Apollo Client, GraphQL, React and TypeScript, then check out Fullstack React with TypeScript and Fullstack Comments with Hasura and React :


Optimistic UIs with React, Apollo Client and TypeScript (Part II) - Optimistic Mutation Results

Disclaimer - Please read the first part of this blog post here before proceeding. It walks through the initial steps of building a messages client that fetches messages from a GraphQL API server. If you are already familiar with the basics of Apollo Client, and only want to know how to update a UI optimistically (for mutation results), then download the project from the part-1 branch of the GitHub repository here and follow the directions in the README.md file to set up the project. In the second part of this tutorial series, we will implement the remaining functionality of the messages client: By the end of this tutorial, you will have recreated the optimistic UI found in Messages : For a user to send a message, the message client must send a request to the GraphQL API server to perform an addMessage mutation. Using the text sent by the user, this mutation creates a new message and adds it to the list of messages managed by the server. The addMessage mutation, which is defined in the GraphQL schema below, expects values for the text and userId variables. The text variable holds the new message's text, and the userId variable holds the ID of the user who sent the message. Once it finishes executing the mutation's resolver, the server responds back with the sent message. Unlike Apollo Client's useQuery Hook, which tells a GraphQL API server to perform a query (fetch data), Apollo Client's useMutation Hook tells a GraphQL API server to perform a mutation (modify data). Like the useQuery Hook, the useMutation Hook accepts two arguments: And returns a tuple with a mutate function and a result object: In the above snippet, the mutate function is named addMessage . The mutate function lets you send mutation requests from anywhere within the component. The result object contains the same properties as the result object returned by the useQuery Hook, such as data , loading and error . For the <MessagesClient /> component, the application can ignore this result object from the tuple. Since the mutation should cause the UI to update optimistically, the application does not need to present a loading message to indicate a mutation request being sent and processed. Therefore, it does not need to know when the mutation is in-flight. As for the data returned as a result of a successful mutation and the errors that the mutation may produce, we will handle those later on in this tutorial. The <MessagesClient /> component calls the useMutation Hook and destructures out the mutate function (naming it addMessage ) from the returned tuple. The <MessagesClient /> component contains an input field for the current user to type and send messages. First, let's create a ref and attach it to the <input /> element. The ref gives you access to the text typed into the <input /> element. Then, let's attach an event handler (named handleOnSubmit ) to the input field's parent <form /> element's onSubmit attribute that executes the addMessage mutate function when the user sends a message (submits the form with a non-empty input field). The handler calls addMessage , passing in (as the argument) an options object with a variables option, which specifies the values of all the variables required by the mutation. The addMessage mutation requires two variables: Once you finish making these adjustments to the <MessagesClient /> component, run/reload the application. 
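A minimal sketch of this wiring inside <MessagesClient /> is shown below; ADD_MESSAGE and the currentUser context value are assumed to already be in scope, the React and Apollo imports are omitted, and the surrounding markup is reduced to a comment.

```tsx
// A sketch of the ref, the mutate function and the submit handler inside <MessagesClient />.
const inputRef = useRef<HTMLInputElement>(null);
const [addMessage] = useMutation(ADD_MESSAGE);

const handleOnSubmit = (event: React.FormEvent<HTMLFormElement>) => {
  event.preventDefault();

  const text = inputRef.current?.value.trim();
  if (!text || !currentUser) return;

  // The mutation requires two variables: the message's text and the sender's ID.
  addMessage({ variables: { text, userId: currentUser.id } });

  // Clear the input field for the next message.
  if (inputRef.current) inputRef.current.value = "";
};

// In the JSX: <form onSubmit={handleOnSubmit}><input ref={inputRef} ... /></form>
```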
When you send a message, the UI does not update with the message you just sent despite the server successfully processing the mutation request and sending back a response with this message. Typing message into input field: Sending message (submitting the form): Checking the response from the server: To update the UI with this message, we must update the messages stored in the Apollo Client cache. By default, Apollo Client stores the results of GraphQL queries in this local, normalized cache and uses cache APIs, such as cache.writeQuery and cache.modify, to update cached state. Anytime a field within the cache gets modified, queries with this field automatically refresh, which causes the components using these queries to re-render. In this case, the query within the <MessagesClient /> component has a messages field. Once the query resolves and returns with data that has this field (set to a list of messages), like this: Apollo Client normalizes and stores the data in the cache as a flat lookup table by... As a result, the root level of the cache serves as a flat lookup table. To see this in action, add the update caching option to the addMessage mutate function's options argument, and set this option to a function with two parameters: Apollo Client executes the update function after the addMessage mutation completes. When you log cache, you will see that cache contains metadata and methods available to an instance of InMemoryCache, which was specified as the Apollo Client's cache. Upon further inspection, you will find the fields messages and users (from the query operations) under cache.data.data.ROOT_QUERY. Logging cache: Inspecting ROOT_QUERY in the cache: Notice how Apollo Client stores the query responses using a normalization approach. The cache only keeps one copy of each piece of data, adjacent to ROOT_QUERY. This reduces the amount of data redundancy in the cache. Instead of repeating the same user data across every message, each message in the cache references the user by a unique identifier. This lets the Apollo Client easily locate the user's data in its cache's flat lookup table. When you log mutationResult, you will see the message you just sent. To render this message to the UI, we must add it to the list of messages already cached by the Apollo Client. Any changes made to the cached query results get broadcast across the application and re-render the components with those active queries. Within the update function, check if the mutation has completed and returned the sent message, like so: Then, we will directly modify the value of the cache's messages field with the cache.modify method. This method takes a map of modifier functions based on the fields that should be changed. Each modifier function supplies its field's current cached value as a parameter, and the value returned by this function replaces the field's current value. In this case, the map will only contain a single modifier function for the messages field. Note: cache.modify overwrites fields' values. It does not merge incoming data with fields' values. If you log existingMessagesRefs, then you will see that it points to the value of the messages field under ROOT_QUERY (a list of objects with references to the actual messages in the cache's flat lookup table). To add the message to the cache, the modifier function must... Note: The fragment option of the argument passed to the cache.writeFragment method determines the shape of the data to write to the cache.
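Putting those pieces together, the update callback could look like the following sketch, which closely follows Apollo's documented cache.modify pattern; the fragment's fields are assumptions based on the message shape used throughout this series, and gql is assumed to be imported from @apollo/client.

```tsx
// A sketch of the update option passed to the addMessage mutate function.
addMessage({
  variables: { text, userId: currentUser.id },
  update(cache, mutationResult) {
    const newMessage = mutationResult.data?.addMessage;
    if (!newMessage) return; // the mutation has not returned the sent message

    cache.modify({
      fields: {
        messages(existingMessagesRefs = []) {
          // Write the new message into the cache's flat lookup table and get a reference to it.
          const newMessageRef = cache.writeFragment({
            data: newMessage,
            fragment: gql`
              fragment NewMessage on Message {
                id
                text
                userId
                createdAt
              }
            `,
          });

          // The returned array replaces the messages field's current value.
          return [...existingMessagesRefs, newMessageRef];
        },
      },
    });
  },
});
```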
It should match the shape of the data specified by the query. Reload the application. When you send a message, the UI now updates with the message you just sent. However, this UI update happens after, not before, the server successfully processes the mutation request and sends back a response with this message. To better observe this, simulate slow network speeds in the developer console and send a message again. Let's make the UI feel more responsive by optimistically updating it when the user sends a message. Displaying the mutation result before the GraphQL API server sends back a response gives the illusion of a performant UI and keeps the user engaged in the UI without any delays. When the server eventually sends back a response, the result from the server replaces the optimistic result. If the server fails to persist the data, then Apollo Client rolls back the optimistic UI updates. To optimistically update the UI to show the message immediately after the user sends a message, provide an optimisticResponse option in the addMessage mutate function's options argument, and set this option to the message that should be added to the cache. The message must be shaped exactly like the message returned by the addMessage mutation, and it must also include id and __typename attributes so that the Apollo Client can generate the unique identifiers necessary for the cache to remain normalized. Any optimistic update for the addMessage mutation goes through an optimistic mutation lifecycle: Altogether... ( src/components/MessagesClient.tsx ) If you find yourself stuck at any point while working through this tutorial, then feel free to visit the part-2 branch of this GitHub repository here for the code. If you simulate offline behavior in the developer tools and try to send a message, then you will see that the UI optimistically updates with the message for a brief moment before removing it from the UI. In the Messages app, when a message fails to be delivered, the message remains in the UI, but a red exclamation point icon appears next to the message to give the user a chance to resend the message. Continue on to the final part of this tutorial here to learn how to handle such situations in optimistic UIs.
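For reference, here is a minimal sketch of the optimisticResponse option described above; the temporary ID and timestamp are assumptions, since the object must mirror whatever the addMessage resolver actually returns.

```tsx
// A sketch of optimistically adding the sent message to the cache.
addMessage({
  variables: { text, userId: currentUser.id },
  optimisticResponse: {
    addMessage: {
      __typename: "Message",
      id: crypto.randomUUID(),             // temporary ID, replaced by the server's ID later
      text,
      userId: currentUser.id,
      createdAt: new Date().toISOString(), // temporary timestamp
    },
  },
  update(cache, mutationResult) {
    // Same cache.modify logic as before; it runs once with the optimistic
    // result and again with the server's actual result.
  },
});
```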


Optimistic UIs with React, Apollo Client and TypeScript (Part I) - Project Overview

Liking a tweet on Twitter. Marking an e-mail as read in your G-Mail inbox. These types of simple, low-stake actions seem to happen so quickly that you can perform one action after another without having to wait for the previous one to finish resolving. As the defining trait of optimistic UIs, these actions give the feeling of a highly responsive and instant UI. Psychologically speaking, they trick the user into thinking that an action has completed even though the network request it sends to the server has not been fully processed. Take, for example, the like button of a tweet. You can scroll through an entire feed and like every single tweet with zero delays between successive tweets. To observe this, open up a Twitter feed and your browser's developer console. Within the developer console, switch to the network tab and select the "Slow 3G" option under the throttling dropdown to simulate slow 3G network speeds. Slowing down network speeds lets us see the UI updates happen before the server returns a response for the action. Then, filter for network requests sent to a GraphQL API endpoint containing the text "FavoriteTweet" (in the request URL), which tells the server to mark the tweet as liked by the current user. When you click on a tweet's like button, the heart icon disappears, the like count increments by one and the text color changes to pink even though the network request is still pending. While the server handles this request, the updates to the UI give the illusion that the server already finished processing the request and returned a successful response. In the below GIF, you can watch how liking multiple tweets, one after the other, immediately increments the like count of each tweet by one on the UI even if the server is busy working on previous requests. The user gets to like as many tweets as they want without waiting on any responses from the server. Upon receiving a response back from the server, the heart icon of the like button fades back in with an animation. Here's what a normal implementation of the like button might look like: Here's what Twitter's implementation of the like button looks like: Note: Twitter's UI never disables the like button. In fact, you can click on the like button as many times as you like. The UI will be updated accordingly, and the network requests for every click get sent to the server. By building UIs in this manner, the application's performance depends less on factors like the server's status/availability and the user's network connectivity. Since humans, on average, have a reaction time of 200 to 300 milliseconds, being delayed for this amount of time (or more) between actions (due to server response times) can not only cause a frustrating user experience but also hurt the brand's image. Being known for having a slow, unreliable, unresponsive UI makes users less likely to enjoy and engage with the UI. As long as the user perceives actions as being instant and working seamlessly, they won't ever question the application's performance. The key to adopting optimistic UI patterns is understanding the meaning of the word "optimistic." Optimistic means being hopeful and confident that something good will occur in the future. In the context of optimistic UIs, we should be confident that for some user action, the server, in at least 99% of all cases, returns a successful response, and in less than 1% of all cases, the server returns an error.
In most situations, low-stake actions tend to be ideal candidates when deciding where to apply optimistic UI patterns. To determine whether an action is a low-stake action, ask yourself these questions: If the answer to all these questions is yes, then the action is a low-stake action, and thus, can update the UI optimistically with more benefits to the user experience than drawbacks. In the case of Twitter's like button: On the other hand, you should not consider optimistic UI patterns for high-stake actions, especially those involving very important transactions. For example, could you imagine a bank site's UI showing you that your check was successfully deposited, and then discovering days later, when you have to pay a bill due the next day, that it was not deposited because the server happened to be experiencing a brief outage during that time? Think about how angry you would be at the bank and how this might sour your perception of the bank. Integrating optimistic UI updates into an application comes with challenges like managing local state such that results of an action can be simulated and reverted. However, applications built with React and Apollo Client have the necessary tools, features and APIs for easily creating and maintaining optimistic UIs. Below, I'm going to show you how to recreate a well-known optimistic UI found in a popular iOS app, Messages, with React and Apollo Client. When a user sends a message, the message appears to have been sent successfully even if the server has not yet finished processing the request. Once the server returns a successful response, there are no changes made to the UI except for a "Delivered" status text being shown beneath the most recently sent message. To get started, scaffold a new React application with the Create React App and TypeScript boilerplate template. For this project, we will be building a "public chatroom" that lets you choose which user to send messages as: Upon picking a user, the application displays the messages from the perspective of the selected user, and you can send messages as this user. Note: This server does not support real-time communications since that's outside the scope of this tutorial. You can add functionality for real-time communications with GraphQL subscriptions. Next, clone (or fork) the following GraphQL API server running Apollo Server. https://codesandbox.io/embed/apollo-server-public-chat-room-for-optimistic-ui-example-srb5q?fontsize=14&hidenavigation=1&theme=dark This server defines a GraphQL schema for a basic chat application with two object types: User and Message. It comes with a query type (for fetching user/s and message/s) and a mutation type (for adding a new message to the existing list of messages). Initially, this server is seeded with two users and twenty messages. Each resolver populates a single field with this seeded data that is stored in memory. Within the newly created React application, let's install several dependencies: Since the application will be styled with Tailwind CSS, let's set up Tailwind CSS for this application. Within the tailwind.config.js file, add the paths glob pattern ./src/**/*.{js,jsx,ts,tsx} to tell Tailwind which types of files contain React components. Since the UI features an input field, we should also add the @tailwindcss/forms plugin with the strategy option set to class to leverage Tailwind CSS form component styles via CSS classes. ( tailwind.config.js ) Delete the src/App.css file and remove all of the default CSS rules in the src/index.css file.
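Here is a minimal sketch of the tailwind.config.js changes described above, assuming Tailwind CSS v3 (which uses the content option); adjust it if you are on an older version.

```js
// tailwind.config.js -- a sketch with the glob pattern and forms plugin described above.
module.exports = {
  content: ['./src/**/*.{js,jsx,ts,tsx}'],
  theme: {
    extend: {},
  },
  plugins: [
    // Opt in to the form styles via CSS classes (e.g., "form-input") instead of global resets.
    require('@tailwindcss/forms')({ strategy: 'class' }),
  ],
};
```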
Within the src/index.css file, add the standard @tailwind directives: ( src/index.css ) Add several empty directories to the src directory: To initialize an ApolloClient instance, import the Apollo Client and pass it a configuration object with two options: To make the Apollo Client instance available throughout the entire React application, wrap the <App /> component within the provider component <ApolloProvider>, which uses React's Context API. Here's what the src/index.tsx file should look like: ( index.tsx ) The application contains two child components: Since both components must know who the current user is, and the <UsersList /> component sets the current user, let's define a React context AppContext to make the current user globally available to the application's component tree. Within the src/context directory, add an index.ts file: Then, define the React context AppContext. Its value should contain a reference to the current user ( currentUser ) and a method for setting the current user ( changeCurrentUser ). ( src/contexts/index.ts ) Although we initialize the value of AppContext to an empty object, we will later set this context's value in the <App /> component, where we will pass it its actual value via its provider component's value prop. The AppContextInterface interface enforces the types allowed for each method and value specified in the context's value. You may notice a User type that is imported from a src/types/index.ts file. Within the src/types directory, add an index.ts file: Based on the GraphQL schema, define a User interface. ( src/types/index.ts ) Within the src/App.tsx file, import AppContext and wrap the child components and elements of the <App /> component with the AppContext.Provider provider component. Inside of the <App /> component's body, we define a state variable currentUser, which references the currently selected user, and a method changeCurrentUser, which calls the setCurrentUser update function to set the current user. Both currentUser and changeCurrentUser get passed to the AppContext.Provider provider component's value prop. These values satisfy the AppContextInterface interface. ( src/App.tsx ) The <UsersList /> component fetches a list of users from the GraphQL API server, whereas the <MessagesClient /> component fetches a list of messages from the GraphQL API server. To fetch data from a GraphQL API server with Apollo Client, use the useQuery Hook. This Hook executes a GraphQL query operation. It accepts two arguments: And returns a result object, which contains many properties. These are the most commonly used properties: These properties represent the state of the query and change during its execution. They can be destructured from the result object and referenced within the function body of the component, like so: For more properties, visit the official Apollo documentation here. Once it successfully fetches data from the GraphQL API server, Apollo Client automatically caches this data locally within the cache specified during its initialization (i.e., an instance of InMemoryCache). Using a cache expedites future executions of the same queries. If Apollo Client executes the same query at a later time, then it can get the data directly from the cache rather than having to send (and wait on) a network request. Within the src/components/UsersList.tsx file, define the <UsersList /> component, which...
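The sketch below outlines one possible shape for it; the GET_USERS document, the User fields and the AppContext import path are assumptions, and the real component's markup and Tailwind classes are omitted.

```tsx
// src/components/UsersList.tsx -- a simplified sketch, not the tutorial's exact component.
import { useContext } from "react";
import { useQuery } from "@apollo/client";
import { GET_USERS } from "../graphql/fragments";
import { AppContext } from "../context";
import { User } from "../types";

interface UsersQueryData {
  users: User[];
}

export const UsersList = () => {
  const { currentUser, changeCurrentUser } = useContext(AppContext);
  const { data, loading, error } = useQuery<UsersQueryData>(GET_USERS);

  if (loading) return <p>Loading users...</p>;
  if (error) return <p>Could not load users.</p>;

  return (
    <ul>
      {data?.users.map((user) => (
        <li key={user.id}>
          {/* Clicking a user makes them the current user of the messages client. */}
          <button type="button" onClick={() => changeCurrentUser(user)}>
            {user.name} {currentUser?.id === user.id && "✓"}
          </button>
        </li>
      ))}
    </ul>
  );
};
```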
( src/components/UsersList.tsx ) Once the data has been successfully fetched, the component renders a list of users who are members of the "public chatroom." When you click on one of the users, you select them as the current user. A check mark icon appears next to the user's name to indicate that they have been selected. Since the query returns a list of users, the UsersQueryData interface contains a users property that should be set to a list of User items, like so: ( src/types/index.ts ) Note: It should match what's specified by the GraphQL query string that's passed to the useQuery Hook. To refresh the cached data with the latest, up-to-date data from the GraphQL API server, you can: To know when Apollo Client is refetching (or polling) the data, destructure out the networkStatus value from the result object, and check if it equals NetworkStatus.refetch, which indicates an in-flight refetch, or if it equals NetworkStatus.poll, which indicates an in-flight poll. Note: The notifyOnNetworkStatusChange networking option tells Apollo Client to re-render the component whenever the network status changes (e.g., when a query is in progress or encounters an error). For a full list of network statuses you can check for, click here. Like the <UsersList /> component, the <MessagesClient /> component also fetches data (in this case, a list of messages) by calling the useQuery Hook. When rendering the messages, the current user's messages are aligned to the right side of the messages client. These messages have a blue background with white text. All other messages are aligned to the left side of the messages client. By adding the sender's initials and name to each of these messages, we can tell who sent which message. ( src/components/MessagesClient.tsx ) All that's missing from the messages client, UI-wise, is an input field for sending messages. Below the messages, add a form with an input field and send button. Altogether... ( src/components/MessagesClient.tsx ) If you find yourself stuck at any point while working through this tutorial, then feel free to visit the part-1 branch of this GitHub repository here for the code. Thus far, we learned how companies like Twitter adopt optimistic UI patterns to deliver faster, snappier user experiences. We set up the project with Apollo Client, Tailwind CSS and TypeScript, and we built a UI that queries data from a GraphQL API server. Continue on to the second part of this tutorial here, in which we implement the remaining functionality: Specifically, we will dive into the useMutation Hook and learn how to manipulate data within the Apollo Client cache to update the UI optimistically.


Static Site Generation with Next.js and TypeScript - Project Overview

Many of today's most popular web applications, such as G-Mail and Netflix, are single-page applications (SPAs). Single-page applications deliver highly engaging and exceptional user experiences by dynamically rendering content without fully reloading whole pages. However, because single-page applications generate content via client-side rendering, the content might not be completely rendered by the time a search engine (or bot) finishes crawling and indexing the page. When it reaches your application, a search engine will read the empty HTML shell (e.g., the HTML contains a <div id="root" /> in React) that most single-page applications start off with. For a smaller, client-side rendered application with fewer and smaller assets and data requirements, the application might have all the content rendered just in time for a search engine to crawl and index it. On the other hand, for a larger, client-side rendered application with more and larger assets and data requirements, the application needs a lot more time to download (and parse) all of these assets and fetch data from multiple API endpoints before rendering the content to the HTML shell. By then, the search engine might have already processed the page, regardless of the content's rendering status, and moved on to the next page. For sites that depend on being ranked at the top of a search engine's search results, such as news/media/blogging sites, the performance penalties and slower first contentful paint of client-side rendering may lower a site's ranking. This results in less traffic and business. Such sites should not client-side render entire pages' worth of content, especially when the content changes infrequently (e.g., due to corrections or redactions) or never changes at all. Instead, these sites should serve the content already pre-generated as plain HTML. A common strategy for pre-generating content is static site generation. This strategy involves generating the content in advance (at build time) so that it is part of the initial HTML document sent back to the user's browser when the user first lands on the site. By exporting the application to static HTML, the content is created just once and reused on every request to the page. With the content made readily available in static HTML files, the client has much less work to perform. Similar to other static assets, these files can be cached and served by a CDN for quicker loading times. Once the browser loads the page, the content gets hydrated and maintains the same level of interactivity as if it was client-side rendered. Unlike Create React App, popular React frameworks like Gatsby and Next.js have first-class, built-in static site generation support for React applications. With the recent release of Next.js v12, Next.js applications build much faster with the new Rust compiler (compared to Babel, this compiler is 17x faster). Not only that, Next.js now lets you run code on incoming requests via middleware, and its APIs are compatible with React v18. In this multi-part tutorial, I'm going to show you how to... We will be building a simple, statically generated application that uses the Petfinder API to display pets available for adoption and recently adopted. All of the site's content will be pre-rendered in advance with the exception of pets available for adoption, which the user can update on the client-side.
Home Page ( / ): Listings for Pet Animal Type ( /types/<type> ): Visit the live demo here: https://petfinder-nextjs.vercel.app/ To get started, initialize the project by creating its directory and package.json file. Note: If you want to skip these steps, then run the command npx create-next-app@latest --ts to automatically scaffold a Next.js project with TypeScript. Then, proceed to the next section of this tutorial. Install the following dependencies and dev. dependencies: Add a .prettierrc file with an empty configuration object to accept Prettier's default settings. Add the following npm scripts to the package.json file: Here's what each script does: At the root of the project directory, create an empty TypeScript configuration file ( tsconfig.json ). By running next, Next.js automatically updates the empty tsconfig.json file with Next.js's default TypeScript configuration. ( tsconfig.json ) Additionally, this command auto-generates a next-env.d.ts file at the root of the project directory. This file guarantees Next.js types are loaded by the TypeScript compiler. ( next-env.d.ts ) To further configure Next.js, create a next.config.js file at the root of the project directory. This file allows you to override some of Next.js's default configurations, such as the project's base Webpack configurations and mapping between incoming request paths and destination paths. For now, let's just opt in to React's Strict Mode to spot any potential problems, such as legacy API usage and unsafe lifecycles, in the application during development. ( next.config.js ) Similar to the tsconfig.json file, running next lint automatically installs the eslint and eslint-config-next dev. dependencies. Plus, it creates a new .eslintrc.json file with Next.js's default ESLint configuration. Note: When asked "How would you like to configure ESLint?" by the CLI, select the "Strict" option. ( .eslintrc.json ) This application will be styled with utility CSS rules from the Tailwind CSS framework. If you are not concerned with how the application is styled, then you don't have to set up Tailwind CSS for the Next.js application and can proceed to the next section. Otherwise, follow the directions here to properly integrate Tailwind CSS. To register for a Petfinder account, visit the Petfinder for Developers site and click on "Sign Up" in the navigation bar. Follow the registration directions. Upon creating an account, you can find the API key (passed as the client ID in the request payload to the POST https://api.petfinder.com/v2/oauth2/token endpoint) and secret (passed as the client secret in the request payload to the POST https://api.petfinder.com/v2/oauth2/token endpoint) under your account's developer settings. Here, you can track your API usage. Each account comes with a limit of 1000 daily requests and 50 requests per second. At the root of the project directory, create a .env file with the following environment variables: ( .env ) Replace the Xs with your account's unique client ID and secret. The home page features a grid of cards. Each card represents a pet animal type catalogued by the Petfinder API: These cards lead to pages that contain listings of pets recently adopted and available for adoption, along with a list of breeds associated with the pet animal type (e.g., Shiba Inu and Golden Retriever for dogs). Suppose you have to build this page with client-side rendering only.
To fetch the pet animal types from the Petfinder API, you must: Initially, upon visiting the page, the user would be presented with a loader as the client... Having to wait on the API to process these two requests before any content is shown on the page only adds to a user's frustrations. Wait times may even be worse if the API happens to be experiencing downtime or dealing with lots of traffic. You could store the access token in a cookie to avoid sending a request for a new access token each time the user loads the page. Still, you are left with sending a request for a list of types each time the user loads the page. Note: For stronger security (i.e., to mitigate cross-site scripting by protecting the cookie from malicious JavaScript code), you would need a proxy backend system that interacts with the Petfinder API and sets an HttpOnly cookie with the access token on the client's browser after obtaining the token from the API. More on this later. This page serves as a perfect example for using static site generation over client-side rendering. The types returned from the API will very rarely change, so fetching the same data for each user is repetitive and unnecessary. Rather, just fetch this data once from the API, build the page using this data and serve up the content immediately. This way, the user does not have to wait on any outstanding requests to the API (since no requests will be sent) and can instantly engage with the content. With Next.js, we will leverage the getStaticProps function, which runs at build time on the server-side. Inside this function, we fetch data from the API and pass the data to the page component as props so that Next.js pre-renders the page at build time using the data returned by getStaticProps. Note: In development mode ( npm run dev ), getStaticProps gets invoked on every request. Now, within the root of the project directory, create a pages directory, which will contain all of the page components. Next.js's file-system based router maps page components to routes. For example, pages/index.tsx maps to /, pages/types/index.tsx maps to /types and pages/types/[type].tsx maps to /types/:type ( :type is a URL parameter). Create three more directories: The Petfinder API documentation provides example responses for each of its endpoints. With these responses, we can define interfaces for the responses of endpoints related to pet animals, pet animal types and pet animal breeds. Create an interfaces directory within the shared directory. Inside of the interfaces directory, create a petfinder.interface.ts file. ( shared/interfaces/petfinder.interface.ts ) Note: This tutorial skips over endpoints related to organizations. Inside of the pages directory, create an index.tsx file, which corresponds to the home page at /. Let's build out the home page by first defining the <HomePage /> page component's structure. ( pages/index.tsx ) Let's create the <TypeCardsGrid /> component, which renders a grid of cards (each represents a pet animal type). The component places the cards in a 4x2 grid layout for large screen sizes (width >= 1024px), 3x3 grid layout for medium screen sizes (width >= 768px), 2x4 grid layout for small screen sizes (width >= 640px) and a single column for mobile screen sizes (width < 640px). ( components/TypeCardsGrid.tsx ) Let's create the <TypeCard /> component, which renders a card that represents a pet animal type.
The card shows a generic picture of the pet animal type and a link to browse listings (recently adopted and available for adoption) of pet animals of this specific type. Note: The types returned from the Petfinder API do not have an id property, which serves as both a unique identifier and a URL slug (e.g., the ID of type "Small & Furry" is "small-furry"). In the next section, we will create a helper method that takes a type name and turns it into an ID. ( components/TypeCard.tsx ) Since the Petfinder API does not include an image for each pet animal type, we can define an enumeration ANIMAL_TYPES that supplements the data returned from the API with an Unsplash stock image for each pet animal type. To account for the images' different dimensions in the <AnimalCard /> component, we display the image as a background cover image of a <div /> and position the image such that the animal in the image appears in the center of a 10rem x 10rem circular mask. ( .enums/index.ts ) Like single-page applications that don't fully reload the page when navigating between different pages, Next.js also lets you perform client-side transitions between routes via the Link component of next/link. This component wraps around an <a /> element, and the href gets passed to the component instead of the <a /> element. When built, the generated markup ends up being just the <a /> element, with the href attribute applied to it and the client-side navigation behavior of the <Link /> component preserved.
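For reference, here is a pared-down <TypeCard /> illustrating that pattern; the typeToId helper and the props are hypothetical stand-ins for the helper method and interfaces mentioned above, the background-image styling is omitted, and the <Link /> usage follows the pre-Next.js-13 convention of wrapping an <a /> element.

```tsx
// components/TypeCard.tsx -- a simplified sketch of the card's link.
import Link from 'next/link';

// Hypothetical helper: turns a type name like "Small & Furry" into the slug "small-furry".
const typeToId = (name: string) =>
  name.toLowerCase().replace(/[^a-z0-9]+/g, '-').replace(/(^-|-$)/g, '');

interface TypeCardProps {
  name: string;
}

const TypeCard = ({ name }: TypeCardProps) => (
  <div>
    <h2>{name}</h2>
    {/* The href is passed to <Link />, which renders the wrapped <a /> element. */}
    <Link href={`/types/${typeToId(name)}`}>
      <a>Browse {name} listings</a>
    </Link>
  </div>
);

export default TypeCard;
```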


Annotating React Styled Components with TypeScript

Styled components redefine how we apply CSS styles to React components. Unlike the traditional approach of manually assigning CSS classes (from an imported CSS file) to elements within a component, CSS-in-JS libraries like styled-components provide primitives for locally scoping CSS styles to a component with unique, auto-generated CSS classes. Consider a simple <Card /> component composed of several styled React components. Each of styled 's helper methods corresponds to a specific DOM node: Here, the <Card /> component's styles, structure and logic can all be found within a single file. styled-components comes up with a unique class name for each set of CSS rules, feeds the CSS to a CSS preprocessor (Stylis), places the compiled CSS within a <style /> tag, injects the <style /> tag into the page and adds the CSS classes to the elements. By tightly coupling components to their styles, we can easily maintain CSS in large React applications. If we need to edit anything about a component, whether it be its color or how it responds to user input, then we can visit one file, which keeps everything related to the component colocated, to make the necessary changes. Unique class names prevent naming collisions with existing class names. Popular CSS-in-JS libraries, such as styled-components and Emotion , come with TypeScript definitions. When pairing styled components with TypeScript, our React application gains all the benefits of a statically typed language while also ensuring component styles remain well-organized. As we write our styled components, our IDE can warn us of any incorrect arguments passed to any of the helper methods, detect typos, perform autocompletion, highlight missing props, etc. Below, I'm going to show you how to annotate React styled components with TypeScript. To get started, scaffold a new React application with the Create React App and TypeScript boilerplate template. Within this new project, install the styled-components library and @types/styled-components , which provides type definitions for styled-components . Within the src directory, create a new directory, components , which contains the React application's components. Within this new directory, create a new file Card.tsx , which contains a <Card /> component. Copy and paste the <Card /> component's source code (from above) into this file. For React Native projects, you would need to install an additional set of type definitions: Then, you must add styled-components-react-native to the list of types in tsconfig.json : ( tsconfig.json ) Suppose we wanted to annotate the example <Card /> component previously mentioned. Annotating a React functional component requires: ( src/components/Card.tsx ) If we want to add to or override the <Card /> component's styles within a parent component, then we need the <Card /> component to accept an optional className prop. By default, all properties are required, but those marked with a question mark are considered optional. ( src/components/Card.tsx ) Now the <Card /> component can be modified from a parent component. ( src/App.tsx ) Suppose a styled component's inner DOM element must be accessed by a parent component. To forward a ref to a styled component, we must pass the styled component to the forwardRef function, which forwards the ref to an inner DOM element that the styled component renders. 
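A pared-down sketch of that pattern is shown below; the real card has more styles and content, and the prop names are illustrative.

```tsx
// src/components/Card.tsx -- a simplified sketch of accepting an optional className.
import styled from 'styled-components';

const Wrapper = styled.div`
  padding: 1rem;
  border-radius: 0.5rem;
`;

interface CardProps {
  title: string;
  className?: string; // optional: only needed when a parent extends or overrides the styles
}

export const Card = ({ title, className }: CardProps) => (
  <Wrapper className={className}>
    <h2>{title}</h2>
  </Wrapper>
);
```

Because the component applies className to a DOM node, a parent can extend it with styled(Card), and the generated class is merged into the card's styles.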
Annotating a styled component that receives a forwarded ref involves two steps: ( src/components/Card.tsx ) The parent <App /> can obtain a ref to the underlying <div /> element and access it whenever it needs to register event listeners on the component, read/edit DOM properties, etc. ( src/App.tsx ) styled-components lets you easily customize your React application's theme via the <ThemeProvider /> wrapper component, which uses the context API to allow all children components underneath it to have access to the current theme. ( index.tsx ) When adapting styles based on props, TypeScript automatically recognizes the theme property on the props object. Below is a screenshot of passing a function to a styled component's template literal to adapt its styles based on its props. Notice that TypeScript raises no warnings or errors when you reference the props object's theme property. In fact, if you happen to use VSCode and hover over props , then a tooltip with its type definition appears. If you hover over props.theme , then you will see its type definition ThemeProps<any>.theme: any . The any indicates that TypeScript will not raise any warnings no matter what property you try to access from props.theme , even if it does not exist! If I reference a property on the props.theme object that might reasonably be available on it like primary , or something ridiculous like helloWorld , then TypeScript will not raise any warnings for either. Hover over any one of these properties, and a tooltip with the type any appears. By default, all properties on the props.theme object are annotated with the type any . To enforce types on the props.theme object, you must augment styled-components 's DefaultTheme interface, which is used as the interface of props.theme . For now, start by defining the primary property's type as a string (i.e., we might set the primary color of the default theme to red ) in DefaultTheme . ( styled.d.ts ) Note : By default, DefaultTheme is empty . That's why all the properties on the props.theme object are annotated with the type any . Within tsconfig.json , add the newly created styled.d.ts declaration file to the list of included files/directories required by TypeScript to compile the project: ( tsconfig.json ) Now, if you revisit the Card styled component, then you will see TypeScript raising a warning about props.theme.helloWorld since we did not define its type within the DefaultTheme interface. If you hover over props.theme.primary , then a tooltip with its type definition, DefaultTheme.primary: string , appears. Plus, if you revisit the index.tsx file, then you will also find TypeScript warning about the theme object being passed to the <ThemeProvider /> wrapper component: This issue can be resolved by replacing the main property with the primary field: Better yet, you can import DefaultTheme from styled-components and annotate the object with this interface. Back inside Card.tsx , you can refactor the Card styled component by setting background-color directly to the current theme's primary color. To see the final result, visit this CodeSandbox demo: https://codesandbox.io/embed/keen-satoshi-fxcir?fontsize=14&hidenavigation=1&theme=dark Try annotating styled components in your own React applications with TypeScript.
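For reference, the declaration merging described above typically looks like the following minimal sketch, with only the primary property from the example typed; add the rest of your theme's properties to the same interface.

```ts
// styled.d.ts -- augment styled-components' DefaultTheme so props.theme is fully typed.
import 'styled-components';

declare module 'styled-components' {
  export interface DefaultTheme {
    primary: string;
  }
}
```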


Storyboarding - The right way to build apps

React Native is a platform for developing apps that can be deployed to multiple platforms, including Android and iOS, providing a native experience. In other words, write once, deploy multiple times. This tenet holds true across most aspects of app development. Take, for example, usability testing. In native development, teams would need to test business logic separately on each platform. With React Native, it only needs to be tested once. The code we write using React Native is good to go on both platforms and, in most cases, covers more than 90% of the entire code base. The React Native platform offers a plethora of options. However, knowing which to use and when comes from understanding how those pieces fit together. For example, do you even need a database, or is AsyncStorage sufficient for your use case? Once you get the hang of the ecosystem around React Native, building apps will become the easy part. The tricky parts are knowing how to set up a strong foundation that helps you build a scalable and maintainable app and using React Native the right way. If we look at the app as a product that our end users will use, we will be able to build a great experience, not just an app. Should that not be the principal aim of using a cross-platform tool? Let's try and break it down. Using React Native we are: Looking at the points above, it's clear that focusing on building our app as a product makes the most sense. Having a clear view of what we are looking to build will help us get there. It will also keep us in check that we are building the right thing, focusing on the end product and not getting lost in the technicalities or challenges of a platform. Storyboarding the app will help us achieve just that. I recommend a Storyboarding approach to building any front-end application, not just apps. This is not the typical storyboard that is created by the design teams, though the idea is similar. These storyboards can be a great way of looking at our app from a technical implementation point of view, too. This step will help us: To start, we will need to go through the wireframe design of the app. The wireframe is sufficient as we will not be focusing on colors and themes here. Next, we will go through every screen and break it down into reusable widgets and elements. The goals of this are multi-fold: For example, let's look at a general user onboarding flow: A simple and standard flow, right? Storyboarding can achieve quite a lot from the perspective of the app's development and structure. Let us see how. Visualize your app from a technical, design, and product standpoint. As you will see, we have already defined eight or nine different elements and widgets here. Also, if elements like the search box, company logo, and the cart icon need to appear on all screens, they can be put inside a Header widget. The process also helps us build consistency across the app. I would recommend building custom elements for even basic native elements like the Text element. What this does is make the app very maintainable. Say, for some reason, the designer decides to change the app's font tomorrow. If we have a custom element, changing that is practically a one-line change in the application's design system. That might sound like an edge case, but I am sure we have all experienced it. What about changing the default font size of the app or using a different font for bold texts or supporting dark mode? The Atomic Design pattern talks about breaking any view into templates, organisms, molecules, and atoms.
If you have not heard about it, Atomic Design comes highly recommended, and you can read about it here. Taking a cue from the methodology, we will break down the entire development process into elements and widgets and list out all those that we will require to build the views. How do you do this? The steps are as follows: This exercise will help streamline the entire development process. You'll end up with a list of widgets and elements that you need to build. This list will work like a set of Lego blocks that will build the app for you. You may end up with a list like this for the e-commerce app: Looking at this list, we might decide to build a carousel widget that works like a banner carousel by passing banners as children, and as a category scroller by passing an array of category icons. If we do this exercise of defining every component for the entire app before we start building, it will improve our technical design and allow us to plan better. The process can also help iron out design inconsistencies, as we will be defining all the elements, down to the most basic ones. If, for example, we were to end up with more than four or five primary buttons to define, that could indicate that we need to review the design from a user experience perspective. Following this model will make the development approach very modular and set us up for the development phase. By now, we should also have a thorough understanding of: We also have an idea of how the layout of views will look from a technical standpoint: do we need a common header, how will transitions happen if there is animation, and so on. To summarize, we now have a wireframed plan in place that will give us a lot of confidence as we proceed with development. To learn more about building apps with React Native, check out our new course, The newline Guide to React Native for JavaScript Developers.
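To illustrate the custom Text element idea mentioned earlier, here is a minimal sketch of what such a wrapper might look like in TypeScript. The component name, file name and font values are illustrative assumptions, not code from the course:

```tsx
// AppText.tsx - a hypothetical wrapper around React Native's Text element.
// Centralizing the font here means a design change becomes a one-line edit.
import React from 'react';
import { Text, TextProps, StyleSheet } from 'react-native';

const styles = StyleSheet.create({
  base: {
    fontFamily: 'System', // swap the app-wide font in one place
    fontSize: 16,
  },
});

export const AppText: React.FC<TextProps> = ({ style, children, ...rest }) => (
  <Text style={[styles.base, style]} {...rest}>
    {children}
  </Text>
);
```

Every screen then renders AppText instead of Text, so changing the font, default size or dark-mode styling stays a single-file change.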


Adding TypeScript to a React Native for macOS Project

Forked from Facebook's React Native project, React Native for Windows and macOS ships with Flow by default. Compared to TypeScript, Flow is far less popular, and there are fewer third-party library interface definitions written for Flow than for TypeScript. If your project requires a static type checker, then pick TypeScript for its widespread support across many third-party libraries/frameworks and its thriving ecosystem. Setting up TypeScript for the React Native codebase will improve code quality and readability through static type checking that verifies type safety, surfaces potential bugs in the code and explicitly annotates code with types. Plus, if you are coding within an IDE that features intelligent code completion (strongly integrated with TypeScript) like VSCode, then anytime you type in the editor, suggestions and info about variables, function signatures, etc. pop up inline to provide helpful hints and keep you productive. Below I'm going to show you how to... To get started, create a new React Native for macOS project by running these commands, as instructed by the official documentation: Inside of the project directory, delete the .flowconfig configuration file, which is installed by default. Since we will be integrating TypeScript into the project, there's no need to also support another static type checker. Install TypeScript and type definitions for React and React Native. To configure TypeScript, add a tsconfig.json file to the project. ( tsconfig.json ) If you have worked previously on a React project that involved TypeScript, then you should be familiar with most of these configuration options. Nevertheless, if you are unclear about any of these configuration options, then consult the TypeScript documentation here. Delete the default App.jsx file and replace it with an App.tsx file. ( App.tsx ) Since the new App.tsx file exports the <App /> component via export const, we must change the import statement of this component from import App from './App' to import {App} from './App' . ( index.js ) Run the application to verify that everything is working properly. If you haven't already created a React Native for Windows and macOS project and are about to begin one, then you can bootstrap it from the TypeScript-based React Native template. Pass the template name react-native-template-typescript to the react-native init command instead of the standard react-native template. Then, run the same commands for scaffolding a new React Native for Windows and macOS project. Inside of the macos/<projectname>-macOS/ViewController.m file, correct the casing of moduleName, which must follow the same casing as your project's name (camelcased), not lowercased. For example, if your project name is rnMacTs, then this file will specify moduleName as rnmacts, which is incorrectly cased. Therefore, change this line... ( macos/rnmacts-macOS/ViewController.m ) to this... ( macos/rnmacts-macOS/ViewController.m ) Otherwise, you will encounter the following issue when you try to run the application: Here's the tsconfig.json file that's automatically generated from this template. It is loaded with helpful comments that describe each configuration option's purpose. You can uncomment some of the commented-out configuration options if you need additional checks by TypeScript. ( tsconfig.json ) Here's the App.tsx file that's automatically generated from this template. It's just a TypeScript version of the standard React Native template's App.jsx file. 
( App.tsx ) Run the application to verify that everything is working properly. When you make a change within App.tsx , those changes will automatically be reflected in the application due to hot-reloading! Try building your next desktop application with React Native for Windows and macOS and TypeScript!
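To make the export const point above concrete, a minimal App.tsx along those lines might look like the sketch below. This is a simplified stand-in, not the template's exact file:

```tsx
// App.tsx - exported as a named export, which is why index.js must use
// `import {App} from './App'` instead of `import App from './App'`.
import React from 'react';
import { SafeAreaView, Text } from 'react-native';

export const App = () => (
  <SafeAreaView>
    <Text>Hello from React Native for macOS and TypeScript!</Text>
  </SafeAreaView>
);
```

The registration call in index.js stays the same; only the import statement changes, as described above.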


Scaffolding a React Component Library with Storybook (Using TypeScript)

When a company develops and releases a new product onto its platform, users expect this product to deliver an experience similar to another product (on the platform) they have worked with. For example, many people, including yourself, are probably familiar with at least one of Google's collaborative office tools, such as Google Sheets and/or Google Docs , that integrate seamlessly with Google Drive . Now, suppose Google announces a new collaborative office tool at Google I/O. If you decide to use this tool, then you may notice how much faster it takes for you to learn this tool, along with its shortcuts and tricks, because the interface contains features that you have previously interacted with in other tools. Across each tool, the appearance of these features, such as the editing toolbar and sharing dialog, remains consistent since they draw upon the same set of foundational elements, controls, colors, typography, animations, etc. By building a component library, we can centralize all reusable components at one location and access these components from any of our products. Furthermore, pairing a component library with a design system unifies every product within the platform under a singular brand identity. For distributed teams that work on products independently of one another, this allows teams to follow the same design principles/philosophies/patterns, share code and create components in isolation. Writing components in an environment outside of our application makes them flexible and adaptable to any specific layout requirements. This way, the component's design can account for both known and unforeseen use cases. Anytime designers, developers and product managers contribute to the library and update the components, those changes immediately propagate down to the products using those components. With so many different stakeholders involved, we need to build durable components that are thoroughly tested and well-documented. A popular open-source tool for organizing and building components in isolation is Storybook , which comes with a sandbox for previewing/demoing components and mocking use cases to capture different states of a component in stories . Documenting these use cases as stories especially helps in onboarding new team members. Storybook has integrations with different front-end libraries/frameworks: React , Vue , Angular , Svelte , etc. Additionally, if you need functionality that Storybook doesn't already provide, then you can create addons to extend Storybook. Below, I'm going to show you how to add Storybook to your component library. To get started, clone the following repository: This repository contains a component library with customizable D3 visualizations written in React and TypeScript. Currently, this repository only has one visualization component, a scatterplot. Inside of the project directory, install the dependencies: For this tutorial, we will be adding Storybook to this library and writing stories for its scatterplot component. Add Storybook to the library with Storybook CLI. As of Storybook v6.0, Storybook CLI automatically detects whether the project is TypeScript-based and configures Storybook to support TypeScript without any additional configuration. This command creates the following directories and files within the project: Within package.json , several Storybook dependencies are now listed under devDependencies . 
Along with these new dependencies, several NPM scripts are added for running Storybook locally and building Storybook as a static web application (to host on a cloud service and publish online). Run the storybook NPM script to run Storybook locally. This command spins up Storybook on localhost:6006 and automatically opens Storybook inside the browser. Once Storybook loads, you will be presented an introductory page that contains links to additional learning resources. Note : You can modify this page by editing the src/stories/Introduction.stories.mdx file. Each *.stories.tsx file defines a component's stories. To view a component's stories in Storybook, click on the item in the left sidebar that corresponds to the component to expand a list of its stories. For example, if you click on the "Button" item, then the canvas displays the first story listed for the <Button /> component. Rendered as an iframe, the canvas allows components to be tested in isolation. Altogether, four stories appear beneath the "Button" item in the sidebar: Each story describes how certain parameters affect the rendering of the component. To understand what this means, let's look inside the <Button /> component's source and story files. ( src/stories/Button.tsx ) ( src/stories/Button.stories.tsx ) Template is a function that accepts args and uses them to render the component. It serves as a template for defining a story. To keep the example simple, args represents props that are passed directly to the component, but they can be modified inside of the Template function for more complex examples. Each story makes a new copy of this template via Template.bind({}) to set its own properties. To specify a story's args , define an args property on the story's copy of the template function, and assign this property an object with the values needed for rendering the component. For example, the story named Primary renders the <Button /> component with the props { primary: true, label: "Button" } (passed directly to the component within the story's Template function via args ). This adds the storybook-button--primary CSS class to the <button /> element and sets its text to "Button." If you want to experiment with different prop values, then adjust the props within the "Controls" panel below the canvas. Only props of primitive types, such as booleans and strings, are dynamically editable. When you enter "red" into the backgroundColor input field, the button's background color changes to red. If you switch from "Controls" to "Actions," then you can see logs of event handlers executed as a result of user interactions. For example, the <Button /> component receives an onClick prop that attaches to its <button /> element. When you click the button in Storybook, the panel will print information about that onClick event. Everything mentioned above also applies to the stories Secondary , Large and Small . If you press the "Docs" tab, then Storybook shows information about the <Button /> component, such as prop descriptions, the code required to render the component shown in a story, etc. The prop descriptions come from inline comments written in the props' exported TypeScript interface. First, let's remove the example stories created during the initialization process. Next, let's recreate the src/stories/Introduction.stories.mdx file with the contents of the library's README.md file. ( src/stories/Introduction.stories.mdx ) @storybook/addon-docs/blocks provides the building blocks for writing documentation pages. 
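Stepping back to the <Button /> stories discussed above, the Template.bind pattern boils down to the following sketch, trimmed from the generated example (the details in your generated files may differ slightly):

```tsx
// Button.stories.tsx - each story copies the template and supplies its own args
import React from 'react';
import { Story, Meta } from '@storybook/react';
import { Button, ButtonProps } from './Button';

export default {
  title: 'Example/Button',
  component: Button,
} as Meta;

// Template renders the component from the args a story provides
const Template: Story<ButtonProps> = (args) => <Button {...args} />;

export const Primary = Template.bind({});
Primary.args = { primary: true, label: 'Button' };

export const Secondary = Template.bind({});
Secondary.args = { label: 'Button' };
```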
For now, the introductory page will have a webpage title of "Example/Introduction" and will render the extracted contents of the README file to both "Canvas" and "Docs." Now, let's write some stories for our library's <Scatterplot /> component. Create a src/stories/Scatterplot.stories.tsx file. Inside of this file, add stories to reflect the following basic use cases for the <Scatterplot /> component: For all of the stories to access data fetched from a remote source, we must set a global loader, which runs before the rendering of the stories, inside of the .storybook/preview.js file. ( .storybook/preview.js ) Here, scatterplotData contains the fetched and processed Iris data, which will be available to the stories of the <Scatterplot /> component. scatterplotData can be accessed by any story template in the project via the template's second argument, the story context, which has a loaded property for accessing loader data. Back to the src/stories/Scatterplot.stories.tsx file, import the Story and Meta types from @storybook/react and export an object with metadata about the component. The pages of the component's stories will be prefixed with the webpage title "Example/Scatterplot." ( src/stories/Scatterplot.stories.tsx ) Define an object ( BASE_ARGS ) with template arguments shared by all of the stories. In this case, each story's scatterplot will have the same dimensions ( dimensions ) and render with the same axes' labels and data ( labels , xAccessorKey and yAccessorKey ). ( src/stories/Scatterplot.stories.tsx ) Write a template function that renders the <Scatterplot /> component based on args set for each story. Since the default value of data is an empty array, we must explicitly check for a flag isEmpty , which will notify the template function to use this empty array only for the "Empty" story. For the other stories, use the scatterplot data fetched by the global loader. ( src/stories/Scatterplot.stories.tsx ) Note : loaded is undefined when accessing the component's docspage. Unfortunately, all of the inline-rendered stories will be empty because loaders are experimental and not yet compatible with inline-rendered stories in Storybook Docs . Write the stories. For the "Default" story, just use the base arguments. For the "Empty" story, make sure to notify the template function to use the default empty data array. For the "Legend" story, set two additional fields to args : one for categoryKey (a key to access a record's category) and another for categoryColors (a list of colors to visually differentiate categories). ( src/stories/Scatterplot.stories.tsx ) Altogether... ( src/stories/Scatterplot.stories.tsx ) Default Story Empty Story Legend Story <Scatterplot /> Component's DocsPage For a final version of this tutorial, check out the GitHub repository here . Try integrating Storybook into your own component library!
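Because the original code listings did not survive on this page, here is a compressed sketch of the loader-driven <Scatterplot /> stories described above. The component's prop names, the fetchIrisData helper and the data shape are assumptions based on the article's description, not the repository's exact code:

```tsx
// .storybook/preview.js defines the global loader, roughly:
// export const loaders = [async () => ({ scatterplotData: await fetchIrisData() })];

// src/stories/Scatterplot.stories.tsx
import React from 'react';
import { Story, Meta } from '@storybook/react';
import { Scatterplot, ScatterplotProps } from '../components/Scatterplot';

export default { title: 'Example/Scatterplot', component: Scatterplot } as Meta;

// Shared arguments; axis labels and accessor keys omitted here for brevity
const BASE_ARGS = { dimensions: { width: 600, height: 600 } };

// The story context (second argument) exposes loader results on `loaded`
const Template: Story<ScatterplotProps & { isEmpty?: boolean }> = (
  { isEmpty, ...args },
  { loaded }
) => <Scatterplot {...args} data={isEmpty ? [] : loaded?.scatterplotData} />;

export const Default = Template.bind({});
Default.args = { ...BASE_ARGS };

export const Empty = Template.bind({});
Empty.args = { ...BASE_ARGS, isEmpty: true };

// The "Legend" story would additionally set categoryKey and categoryColors in args.
```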


Visualizing Geographic SQL Data on Google Maps

Analytics dashboards display different data visualizations to represent and convey data in ways that allow users to quickly digest and analyze information. Many multivariate datasets consumed by dashboards include one or more spatial fields, such as an observation's coordinates (latitude and longitude). Plotting this data on a map visualization contextualizes the data within a real-world setting and sheds light on spatial patterns that would otherwise be hidden in the data. In particular, seeing the distribution of your data across an area connects it to geographical features and area-specific data (e.g., neighborhood/community demographics) available from open data portals. One of the earliest examples of this is the 1854 cholera visualization by John Snow, who marked cholera cases on a map of London's Soho and uncovered the source of the cholera outbreak by noticing a cluster of cases around a water pump. This discovery helped to correctly identify cholera as a waterborne disease rather than an airborne one. Ultimately, it changed how we think about disease transmission and the impact our surroundings and environment have on our health. If your data contains spatial fields, then you too can apply the simple technique of plotting markers on a map to extract valuable insight from your own data. Map visualizations are eye-catching and take on many forms: heatmaps, choropleth maps, flow maps, spider maps, etc. Besides being colorful and aesthetically pleasing, these visualizations provide intuitive controls for users to navigate through their data with little effort. To create a map visualization, many popular libraries (e.g., Google Maps API and deck.gl ) support drawing shapes, adding markers and overlaying geospatial visualization layers on top of a set of base map tiles. Each layer generates a pre-defined visualization based on a collection of data. It associates each data point with certain attributes (color, size, etc.) and renders them onto a map. By pairing a map visualization library with React.js, developers can build dynamic map visualizations and embed them into an analytics dashboard. If the visualizations' data comes from a PostgreSQL database, then we can make use of PostGIS geospatial functions to help answer interesting questions related to spatial relationships, such as which data points lie within a 1 km radius of a specific set of coordinates. Below, I'm going to show you how to visualize geographic data queried from a PostgreSQL database on Google Maps. This tutorial will involve React.js and the @react-google-maps/api library, which contains React.js bindings and hooks for the Google Maps API, to create a map visualization that shows the location of data points. To get started, clone the following two repositories: The first repository contains a Create React App with TypeScript client-side application that displays a query builder for composing and sending queries and a table for presenting the fetched data. The second repository contains a multi-container Docker application that consists of an Express.js API, a PostgreSQL database and pgAdmin. The Express.js API connects to the PostgreSQL database, which contains a single table named cp_squirrels seeded with 2018 Central Park Squirrel Census data from the NYC Open Data portal. Each record in this dataset represents a sighting of an eastern gray squirrel in New York City's Central Park in the year 2018. 
When a request is sent to the API endpoint POST /api/records , the API processes the query attached as the body of the request and constructs a SQL statement from it. The pg client executes the SQL statement against the PostgreSQL database, and the API sends back the result in the response. Once it receives this response, the client renders the data to the table. To run the client-side application, execute the following commands within the root of the project's directory: Inside of your browser, visit this application at http://localhost:3000/ . Before running the server-side application, add a .env.development file with the following environment variables within the root of the project's directory: ( .env.development ) To run the server-side application, execute the following commands within the root of the project's directory: Currently, the client-side application only displays the data within a table. For it to display the data within a map visualization, we will need to install several NPM packages: The Google Maps API requires an API key, which tracks your map usage. It provides a free quota of Google Map queries, but once you exceed the quota, you will be billed for the excessive usage. Without a valid API key, Google Maps fails to load: The process of generating an API key involves a good number of steps, but it should be straight-forward. First, navigate to your Google Cloud dashboard and create a new project. Let's name the project "react-google-maps-sql-viz." Once the project is created, select this project as the current project in the notifications pop-up. This reloads the dashboard with this project now selected as the current project. Now click on the "+ Enable APIs and Services" button. Within the API library page, click on the "Maps JavaScript API" option. Enable the Maps JavaScript API. Once enabled, the dashboard redirects you to the metrics page of the Maps JavaScript API. Click the "Credentials" option in the left sidebar. Within the "Credentials" page, click the "Credentials in APIs & Services" link. Because this is a new project, there should be zero credentials listed. Click the "+ Create Credentials" button, and within the pop-up dropdown, click the "API key" option. This will generate an API key with default settings. Copy the API key to your clipboard and close the modal. Click on the pencil icon to rename the API key and restrict it to our client-side application. Rename API key to "Google Maps API Key - Development." This key will be reserved for local development and usage metrics recorded during local development will be tied to this single key. Under the "Application Restrictions" section, select the "HTTP referrers (web sites)" option. Below, the "Website restrictions" section appears. Click the "Add an Item" button and enter the referrer " http://localhost:3000/* " as a new item. This ensures our API key can only be used by applications running on http://localhost:3000/ . This key will be invalid for other applications. Finally, under the "API Restrictions" -> "Restrict Key" section, select the "Maps JavaScript API" option in the <select /> element for this key to only allow access to the Google Maps API. All other APIs are off limits. After you finish making these changes, press the "Save" button. Note: Press the "Regenerate Key" button if the API key is compromised or accidentally leaked in a public repository, etc. The dashboard redirects you back to the "API & Services" page, which now displays the updated API key information. 
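For reference, the round trip described at the start of this section (the client posts the query to POST /api/records and renders the rows it gets back) could be sketched like this. The payload and response shapes are assumptions, not the repository's exact contract:

```tsx
// A hypothetical helper for the request flow described above.
interface SquirrelRecord {
  x: number; // longitude
  y: number; // latitude
  primary_fur_color: string | null;
}

async function fetchRecords(query: unknown): Promise<SquirrelRecord[]> {
  const response = await fetch('/api/records', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(query), // the query composed by the query builder
  });
  const data = await response.json();
  return data.rows ?? data; // exact response shape assumed
}
```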
Also, don't forget to enable billing! Otherwise, the map tiles fail to load: When you create a billing account and link the project to the billing account, you must provide a valid credit/debit card. When running the client-side application in different environments, each environment supplies a different set of environment variables to the application. For example, if you decide to deploy this client-side application live to production, then you would provide a different API key than the one used for local development. The API key used for local development comes with its own set of restrictions, such as only being valid for applications running on http://localhost:3000/ , and collects metrics specific to local development. For local development, let's create a .env file at the root of the client-side application's project directory. For environment variables to be accessible by Create React App, they must be prefixed with REACT_APP . Therefore, let's name the API key's environment variable REACT_APP_GOOGLE_MAPS_API_KEY , and set it to the API key copied to the clipboard. Let's start off by adding a map to our client-side application. First, import the following components and hooks from the @react-google-maps/api library: ( src/App.tsx ) Let's destructure out the API key's environment variable from process.env : ( src/App.tsx ) Establish where the map will center. Because our dataset focuses on squirrels within New York City's Central Park, let's center the map at Central Park. We will be adding a marker labeled "Central Park" at this location. ( src/App.tsx ) Within the <App /> functional component, let's declare a state variable that will hold an instance of our map in-memory. For now, it will be unused. ( src/App.tsx ) Call the useJsApiLoader hook with the API key and an ID that's set as an attribute of the Google Maps API <script /> tag. Once the API has loaded, isLoaded will be set to true , and we can then render the <GoogleMap /> component. ( src/App.tsx ) Currently, TypeScript doesn't know what the type of our environment variable is. TypeScript expects the googleMapsApiKey option to be set to a string, but it has no idea if the REACT_APP_GOOGLE_MAPS_API_KEY environment variable is a string or not. Under the NodeJS namespace, define the type of this environment variable as a string within the ProcessEnv interface. ( src/react-app-env.d.ts ) Beneath the table, render the map. Only render the map once the Google Maps API has finished loading. Pass the following props to the <GoogleMap /> component: Here, we set the center of the map to Central Park and set the zoom level to 14. Within the map, add a marker at Central Park, which will physically mark the center of the map. ( src/App.tsx ) The onLoad function will set the map instance in state while the onUnmount function will wipe the map instance from state. ( src/App.tsx ) Altogether, here's how your src/App.tsx should look after making the above modifications. ( src/App.tsx ) Within your browser, visit the application at http://localhost:3000/ . When the application loads, a map is rendered below the empty table. At the center of this map is marker, and when you hover over this marker, the mouseover text shown will be "Central Park." Suppose we send a query requesting for all squirrel observations that involved a squirrel with gray colored fur. When we display these observations as rows within a table, answering questions like "Which section of Central Park had the most observations of squirrels with gray colored fur?" 
becomes difficult. However, if we populate the map with markers of these observations, then answering this question becomes easy because we will be able to see where the markers are located and identify clusters of markers. First, let's import the <InfoWindow /> component from the @react-google-maps/api library. Each <Marker /> component will have an InfoWindow, which displays content in a pop-up window (in this case, it acts as a marker's tooltip), and it will only be shown only when the user clicks on a marker. ( src/App.tsx ) Since each observation ("record") will be rendered as a marker within the map, let's add a Record interface that defines the shape of the data representing these observations mapped to <Marker /> components. ( src/App.tsx ) We only want one InfoWindow to be opened at any given time. Therefore, we will need a state variable to store an ID of the currently opened InfoWindow. ( src/App.tsx ) Map each observation to a <Marker /> component. Each <Marker /> component has a corresponding <InfoWindow /> component. When a marker is clicked on by the user, the marker's corresponding InfoWindow appears with information about the color of the squirrel's fur for that single observation. Since every observation has a unique ID, only one InfoWindow will be shown at any given time. ( src/App.tsx ) Altogether, here's how your src/App.tsx should look after making the above modifications. ( src/App.tsx ) Within the query builder, add a new rule by clicking the "+Rule" button. Set this rule's field to "Primary Fur Color" and enter "Gray" into the value editor. Keep the operator as the default "=" sign. When this query is sent to the Express.js API's POST /api/records endpoint, it produces the condition primary_fur_color = 'Gray' for the SQL statement's WHERE clause and will fetch all of the observations involving squirrels with gray-colored fur. Press the "Send Query" button. Due to the high number of records returned by the API in the response, the browser may freeze temporarily to render all the rows in the table and markers in the map. Once the browser finishes rendering these items, notice how there are many markers on the map and no discernable spatial patterns in the observations. Yike! For large datasets, rendering a marker for each individual observation causes massive performance issues. To avoid these issues, let's make several adjustments: Define a limit on the number of rows that can be added to the table. ( src/App.tsx ) Add a state variable to track the number of rows displayed in the table. Initialize it to five rows. ( src/App.tsx ) Anytime new data is fetched from the API as a result of a new query, reset the number of rows displayed in the table back to five rows. ( src/App.tsx ) Using the slice method, we can limit the number of rows displayed in the table. It is increased by five each time the user clicks the "Load 5 More Records" button. This button disappears once all of the rows are displayed. ( src/App.tsx ) To render a heatmap layer, import the <HeatmapLayer /> component and tell the Google Maps API to load the visualization library . For the libraries option to be set to LIBRARIES , TypeScript must be reassured that LIBRARIES will only contain specific library names. Therefore, import the Libraries type from @react-google-maps/api/dist/utils/make-load-script-url and annotate LIBRARIES with this type. ( src/App.tsx ) ( src/App.tsx ) ( src/App.tsx ) ( src/App.tsx ) Pass a list of the observations' coordinate points to the <HeatmapLayer /> component's data prop. 
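Condensed into a single component, the heatmap wiring described above might look like the sketch below. The prop names follow @react-google-maps/api, while the record shape, component name and container styling are assumptions:

```tsx
import React from 'react';
import { GoogleMap, HeatmapLayer, useJsApiLoader } from '@react-google-maps/api';
import { Libraries } from '@react-google-maps/api/dist/utils/make-load-script-url';

// The "visualization" library must be requested for HeatmapLayer to be available
const LIBRARIES: Libraries = ['visualization'];
const CENTER = { lat: 40.7829, lng: -73.9654 }; // approximate center of Central Park

interface SquirrelRecord {
  x: number; // longitude
  y: number; // latitude
}

export const SquirrelHeatmap = ({ records }: { records: SquirrelRecord[] }) => {
  const { isLoaded } = useJsApiLoader({
    id: 'google-map-script',
    googleMapsApiKey: process.env.REACT_APP_GOOGLE_MAPS_API_KEY as string,
    libraries: LIBRARIES,
  });

  if (!isLoaded) return null;

  // Each observation's coordinates become a LatLng point for the heatmap
  const heatmapData = records.map((r) => new google.maps.LatLng(r.y, r.x));

  return (
    <GoogleMap
      mapContainerStyle={{ width: '100%', height: '400px' }}
      center={CENTER}
      zoom={14}
    >
      <HeatmapLayer data={heatmapData} />
    </GoogleMap>
  );
};
```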
( src/App.tsx ) Altogether, here's how your src/App.tsx should look after making the above modifications. ( src/App.tsx ) Save the changes and re-enter the same query into the query builder. Now the table displays only the first five observations of the fetched data, and the heatmap visualization clearly distinguishes areas with no observations from areas with many observations. Click here for the final version of this project. Click here for the final version of this project styled with Tailwind CSS. Try visualizing the data with other Google Maps layers.


React Query Builder - The Ultimate Querying Interface

From businesses looking to optimize their operations, data influences the decisions being made. For scientists looking to validate their hypotheses, data influences the conclusions being arrived at. Regardless, the sheer amount of data collected and harnessed from various sources presents the challenge of identifying rising trends and interesting patterns hidden within this data. If the data is stored within an SQL database, such as PostgreSQL , querying data with the expressive power of the SQL language unlocks the data's underlying value. Creating interfaces to fully leverage the constructs of SQL in analytics dashboards can be difficult if done from scratch. With a library like React Query Builder , which contains a query builder component for fetching and exploring rows of data with the exact same query and filter rules provided by the SQL language, we can develop flexible, customizable interfaces for users to easily access data from their databases. Although there are open source, administrative tools like pgAdmin , these tools cannot be integrated directly into a custom analytics dashboard (unless embedded within an iframe). Additionally, you would need to manage more user credentials and permissions, and these tools may be considered too overwhelming or technical for users who aren't concerned with advanced features, such as a procedural language debugger, and intricate back-end and database configurations. By default, the <QueryBuilder /> component from the React Query Builder library contains a minimal set of controls only for querying data with pre-defined rules. Once the requested data is queried, this data can then be summarized by rendering it within a data visualization, such as a table or a line graph. Below, I'm going to show you how to integrate the React Query Builder library into your application to gain insights into your data. To get started, scaffold a basic React project with the Create React App and TypeScript boilerplate template. Inside of this project's root directory, install the react-querybuilder dependency: If you happen to run into the following TypeScript error... Could not find a declaration file for module 'react'. '<project-name>/node_modules/react/index.js' implicitly has an 'any' type. ... then add the "noImplicitAny": false configuration under compilerOptions inside of tsconfig.json to resolve it. React Query Builder composes a query from the rules or groups of rules set within the query builder interface. This query, in JSON form, should be sent to a server-side application that's connected to a PostgreSQL database to properly format the query into a SQL statement and execute the statement to fetch records of data from the database. For this tutorial, we will send this query to an Express.js API running within a multi-container Docker application. This application also runs a PostgreSQL database and the pgAdmin in separate containers. The API connects to the PostgreSQL database and defines a POST route for processing the query. With Docker Compose, you can execute a single command to spin up all of these services at once on a single host machine! To run the entire back-end, you don't need to manually install PostgreSQL or pgAdmin on your machine; you only need Docker installed on your machine. Plus, if you decide to run other services, such as NGINX or Redis , then you can add them within the docker-compose.yml configuration file. 
Clone the following repository: Inside the root this cloned project, add a .env.development file with the following environment variables: To run the server-side application, execute the following command: This command starts up the server-side application. When you re-build and restart the application with this same command, it will do so from scratch with the latest images. It's up to you if you want to leverage caching to expedite the build and start up processes. Nevertheless, let's break down what this command does: For each docker-compose command, pass a set of environment variables via the --env-file option. This approach in setting environment variables allows these variables to be accessed within the docker-compose.yml file and easily works in a CI/CD pipeline. Since the .env.<environment> files are typically not pushed to the remote repository (i.e., ignored by Git), especially for public-facing projects, when deploying this project to a cloud platform, the environment variables set within the platform's dashboard function the same way as those set by the --env-file option. The PostgreSQL database contains only one table named cp_squirrels that is seeded with 2018 Central Park Squirrel Census data downloaded from the NYC Open Data portal. Each record represents a sighting of an eastern gray squirrel in New York City's Central Park in the year 2018. Let's verify that pgAdmin is running by visiting localhost:5050 in the browser. Here, you will be presented a log-in page. Enter your credentials ( NYCSC_PGADMIN_EMAIL and NYCSC_PGADMIN_PASSWORD ) into the log-in form. On the pgAdmin welcome page, right-click on "Servers" in the "Browser" tree control (in the left pane) and in the dropdown, click Create > Server . Under "General," set the server name to nyc_squirrels . Under "Connection," set the host name to nycsc-pg-db , the container name set for our nycsc-pg-db . It is where our PostgreSQL database is virtually hosted at on our local machine. Set the username and password to the values of NYCSC_PGADMIN_EMAIL and NYCSC_PGADMIN_PASSWORD respectively. Save those server configurations. Wait for pgAdmin to connect to the PostgreSQL database. Once connected, it should appear under the "Browser" tree control. Right-click on the database ( nyc_squirrels ) in the "Browser" tree control and in the dropdown, click the Query Tool option. Inside of the query editor, type a simple SQL statement to verify that the database has been properly seeded: This statement should return the first ten records of the cp_squirrels table. Let's verify that the Express.js API is running by visiting localhost:<NYCSC_API_PORT>/tables in the browser. The browser should display low-level information about the tables available in our PostgreSQL database. In this case, our database only contains a single table: cp_squirrels . Great! With the server-side working as intended, let's turn our attention back to integrating the React Query Builder component into the client-side application. Inside of our Create React App project's src/App.tsx file, import the <QueryBuilder /> component from the React Query Builder library. At a minimum, this component accepts two props: This is what the query builder looks like without any styling and with only these two props passed to the <QueryBuilder /> component: This probably doesn't make much sense, so let's immediately jump into a basic example to better understand the capabilities of this component. 
Let's make the following adjustments to the src/App.tsx file to create a very basic query builder: Open the application within your browser. The following three element component is shown in the browser: The first element is the combinator selector , which is a <select /> element that contains two options: AND and OR . These options correspond to the AND and OR operators of a SQL statement's WHERE clause. The second element is the add rule action , which is a <button /> element ( +Rule ) that when pressed will add a rule. If you press this button, then a new rule is rendered beneath the initial query builder component: A rule consists of a field , an operator and a value editor , and it corresponds to a condition specified in a SQL statement's WHERE clause. The field <select /> element lists all of the fields passed into the fields prop. Notice that the label of the field is shown in this element. The operator <select /> element lists all of the possible comparison/logical operators that can be used in a condition. Lastly, the value editor <input /> element contains what the field will be compared to. For example, if we type -73.9561344937861 into the <input /> field, then the condition that will be specified in the WHERE clause is X = -73.9561344937861 . Basically, this will fetch all squirrel sightings located at the longitudinal value of -73.9561344937861 . With only one rule, the combinator selector is not applicable. However, if we press the add rule action button again, another rule will be rendered, and the combinator selector will become applicable. With two rules, two conditions are specified and combined with the AND operator: X = -73.9561344937861 AND Y = 40.7940823884086 . The third element is the add group action , which is a <button /> element ( +Group ) that when pressed will add an empty group of rules. If you press this button, then a new group is rendered beneath whatever has already been rendered in the query builder component: Currently, there are no rules within the newly created group. When we add two new rules to this group by pressing its add rule action button twice and change the value of its combinator selector to OR , like so: The two rules within this new group are combined together similar to placing parentheses around certain conditions in a WHERE clause to give a higher priority to them during evaluation. For the above case, the overall condition specified to the WHERE clause would be X = -73.9561344937861 AND Y = 40.7940823884086 AND (X = -73.9688574691102 OR Y = 40.7837825208444) . A total of eight fields are defined. Essentially, they are based on the columns of the cp_squirrels table. For each field, the name property corresponds to the actual column name, and the label property corresponds a more presentable column title that is shown in the field <select /> element of each rule. If you look into developer tools console, then you will see many query objects logged to the console: Every single action performed on the query builder that changes the query will invoke the logQuery function, which prints the query to the console. If we import the formatQuery function from the react-querybuilder library and call it inside of logQuery with the query, then we can format the query in many different ways. 
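Pulled together, a stripped-down version of that basic query builder might look like the sketch below. The field names mirror cp_squirrels columns mentioned in the article, and the import style can differ between react-querybuilder versions:

```tsx
import React from 'react';
import QueryBuilder, { RuleGroupType } from 'react-querybuilder';

// A subset of the eight fields, based on the cp_squirrels columns
const fields = [
  { name: 'x', label: 'X' },
  { name: 'y', label: 'Y' },
  { name: 'primary_fur_color', label: 'Primary Fur Color' },
];

// Invoked on every change to the query; formatQuery (also exported by
// react-querybuilder) can turn this object into a SQL WHERE clause, as shown next.
const logQuery = (query: RuleGroupType) => console.log(query);

export const App = () => <QueryBuilder fields={fields} onQueryChange={logQuery} />;
```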
For now, let's format the query to a SQL WHERE clause: ( src/App.tsx ) If we modify any of the controls' values, then both the query (in its raw object form) and its formatted string (as a condition of a WHERE clause) are printed to the console: With the fundamentals out of the way, let's focus on sending the query to our Express.js API to fetch data from our PostgreSQL database. Inside of src/App.tsx , let's add a "Send Query" button below the <QueryBuilder /> component: Note : The underscore prefix of the _evt argument indicates an unused argument. When the user clicks this button, the client will send the most recent query to the /api/records endpoint of the Express.js API. This endpoint takes the query, formats it into a SQL statement, executes this SQL statement and responds back with the result table. We will need to store the query inside a state variable to allow other functions, such as , within the <App /> component to access the query. This changes our uncontrolled component to a controlled component . ( src/App.tsx ) Anytime onQueryChange is invoked, the setUpdateQuery method will update the value of the updateQuery variable, which must adhere to the type RuleGroupType . Update the sendQuery function to send updateQuery to the /api/records endpoint and log the data in the response. ( src/App.tsx ) Inside of the query builder, if we want retrieve squirrel sightings found at the coordinates (40.7940823884086, -73.9561344937861), then create two rules: one for X (longitude) and one for Y (latitude). When we press the "Send Query" button, the result table (in JSON) is printed to the console: Only one squirrel sighting was observed at that particular set of coordinates. Let's display the result table in a simple table: ( src/App.tsx ) Press the "Send Query" button again. The result table (with only one record) should be displayed within a table. The best part is you can add other visualization components to display your fetched data. The sky's the limit! Click here for the final version of this project. Visit the React Query Builder to learn more about how you can customize it to your application's needs.
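And here is a compressed sketch of the controlled query builder plus the "Send Query" flow described above; the state shape, endpoint and response format are assumptions consistent with the article's description:

```tsx
import React, { useState } from 'react';
import QueryBuilder, { RuleGroupType } from 'react-querybuilder';

const fields = [
  { name: 'x', label: 'X' },
  { name: 'y', label: 'Y' },
];

export const App = () => {
  // Keeping the latest query in state makes this a controlled component
  const [updateQuery, setUpdateQuery] = useState<RuleGroupType>({
    id: 'root',
    combinator: 'and',
    rules: [],
  });
  const [records, setRecords] = useState<any[]>([]);

  const sendQuery = async (_evt: React.MouseEvent<HTMLButtonElement>) => {
    const response = await fetch('/api/records', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify(updateQuery),
    });
    const data = await response.json();
    setRecords(data.rows ?? data); // response shape assumed
  };

  return (
    <div>
      <QueryBuilder fields={fields} query={updateQuery} onQueryChange={setUpdateQuery} />
      <button onClick={sendQuery}>Send Query</button>
      {/* `records` can now be rendered in a table or any other visualization */}
    </div>
  );
};
```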


How to Test Your First React Hook Using Testing Library

In previous posts we learned: In this post, we're going to write tests for a custom React hook. We're going to create a custom hook that will uppercase a given text string. This hook will be connected to an input and whenever a user changes the input value this hook will automatically uppercase the value. Let's write the hook. Create a file called useUppercase.ts In this file, we create a custom hook function useUppercase() . It accepts a string as an initial argument. We transform this string using .toUpperCase() and then store the result value in a local state. Also, we create a function update() . This function will be a part of the public API. With this function, components will be able to update the current value. Finally, we use useEffect() to re-render the hook if the initialValue gets changed. This, for example, happens when the component gets new props and passes the new initial value to the hook. In this case, we need to update the local state with the new value from the component. Now, let's test the hook! To test hooks, we're going to use React Hooks Testing Library . It gives us utilities for testing hooks without even rendering a single component. Let's install it: Now, we create a file useUppercase.test.ts . In this file, we import our hook and describe our first test: The first thing we want to test is that the hook returns a given initial value in the upper case. With React Hooks Testing Library, we don't need to render a whole component, instead, we render the hook itself with a special function called renderHook() : The renderHook() function returns an object with the result field. We can access the current render result with the current field. It contains the object our hook returns. We can use autosuggestions to select a field to test against: The coolest feature is readability. We don't need to create extra infrastructure code to test hooks in isolation. It just works! Another thing we want to test is that when we call update with a new string, the hook's value updates. Let's write another test: The current render result contains a reference to our update() method, so we can use it to simulate a change. Notice that we need to use act() . It is required for the hook to update the values inside of it. According to the documentation : The last thing to cover is a case when the hook gets a new initialValue . In this case, we need to update its values and re-render the component that is using them. In this test, we access not only the result but also the rerender() method. This method renders the hook with another initial value. Also, this time we use props to pass initial values. They help us to keep the code shorter and more readable. In general, we pass the initialProps object to options of the render() method. When we need to re-render the hook, we can pass updated values for this object. The library infers the type of the object so we can use autosuggestions here as well:
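Since the code listings did not survive on this page, here is a minimal sketch of the hook and the tests described above; the original post's implementation may differ in detail:

```tsx
// useUppercase.ts
import { useEffect, useState } from 'react';

export function useUppercase(initialValue: string) {
  const [value, setValue] = useState(initialValue.toUpperCase());

  // Re-sync the local state when the component passes a new initial value
  useEffect(() => {
    setValue(initialValue.toUpperCase());
  }, [initialValue]);

  // Part of the public API: lets components update the current value
  const update = (newValue: string) => setValue(newValue.toUpperCase());

  return { value, update };
}

// useUppercase.test.ts
import { renderHook, act } from '@testing-library/react-hooks';
import { useUppercase } from './useUppercase';

describe('useUppercase', () => {
  it('returns the initial value in upper case', () => {
    const { result } = renderHook(() => useUppercase('hello'));
    expect(result.current.value).toBe('HELLO');
  });

  it('updates the value when update() is called', () => {
    const { result } = renderHook(() => useUppercase('hello'));
    act(() => result.current.update('world'));
    expect(result.current.value).toBe('WORLD');
  });

  it('reacts to a new initial value on rerender', () => {
    const { result, rerender } = renderHook(
      ({ initialValue }) => useUppercase(initialValue),
      { initialProps: { initialValue: 'hello' } }
    );
    rerender({ initialValue: 'bye' });
    expect(result.current.value).toBe('BYE');
  });
});
```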


How to Write Your First Component Test in React + TypeScript App

In the previous post, we created a unit test for a function. In this post, we're going to create a unit test for a component using @testing-library/react. Since we're using Create React App, React Testing Library is already installed in our dependencies. If you don't have it, install the package using: We will create a Greetings component with the greetings text inside and a button for sending friendly waves 😊. Let's create and review the component first: This component takes a name and an onSendWaves function as props. The name will be rendered in the greetings text, and the callback function will be called whenever the button is pressed. The button can be hidden if the callback function is not provided. We need to test 3 things here: Let's start with the first one. For the first test, let's check if the greetings text is rendered correctly. Let's break the test code down a bit. In describe and it, we explain the assumption we want to test, as we did in the previous post. Then, we use the render method from the @testing-library/react package. This method renders the component provided as an argument and appends it to the body of the document. Once we've rendered the component, we can use the screen object to get access to the render result. It provides a set of queries for us to find the required element. If you have seen tests with RTL before, you might have seen this: We use the getByText query to find an element that contains the given text. Notice that React Testing Library doesn't focus on the component implementation. We don't explicitly define where to look for that text; instead, we describe what to find, and RTL will try to find it for us. Finally, we check that the required element is indeed in the document. We can select elements not only by the text they contain. In fact, there are quite a few ways to query elements. Let's review a couple of types of queries: Another cool (and a bit confusing at first) feature is the query type. Until now we have only seen a getBy query. It searches for a single element, returns it if found, and throws an error if it doesn't find one. There are also other types of queries for different types of searches: The queryBy and queryAllBy queries are usually used to make sure that elements are not in the document. The findBy and findAllBy queries are used to search for elements that are not available at first render. It is hard to pick a query at first, which is why there is a priority guideline in the docs and the cheatsheet to help us. Now, let's test that the button gets rendered when onSendWaves is provided: Here, we use the getByRole query because we need to find a single button element. React Testing Library has the fireEvent object that can help with simulating browser events; however, it is more convenient to use the user-event library since it provides a more advanced simulation of browser interactions. Let's install it with: Now let's test that when the button is pressed it fires the callback: First of all, we use the jest.fn method to create a mock function. This mock will help us test how many times and with what arguments this function has been called. On the last line, we check that the mock instance ( onSendWavesMock ) has been called at least once with the given text; a different text would result in a failing test. To click the button we use the userEvent object from React Testing Library. It provides the most common actions, like click, input changes, and so on. In our case, we need to test the click action, so we use the click method and pass the element that should be pressed. 
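The listings were stripped from this page as well, so here is a rough sketch of the component and tests described above. The greeting text, button label and assertion details are illustrative; only the prop names come from the article:

```tsx
// Greetings.tsx
import React from 'react';

interface GreetingsProps {
  name: string;
  onSendWaves?: () => void; // the button is hidden when this callback is not provided
}

export const Greetings = ({ name, onSendWaves }: GreetingsProps) => (
  <div>
    <p>Hello, {name}!</p>
    {onSendWaves && <button onClick={onSendWaves}>Send waves</button>}
  </div>
);

// Greetings.test.tsx (jest-dom matchers are set up by Create React App)
import { render, screen } from '@testing-library/react';
import userEvent from '@testing-library/user-event';

describe('Greetings', () => {
  it('renders the greetings text', () => {
    render(<Greetings name="Ada" />);
    expect(screen.getByText(/hello, ada/i)).toBeInTheDocument();
  });

  it('renders the button when onSendWaves is provided', () => {
    render(<Greetings name="Ada" onSendWaves={jest.fn()} />);
    expect(screen.getByRole('button')).toBeInTheDocument();
  });

  it('calls onSendWaves when the button is clicked', () => {
    const onSendWavesMock = jest.fn();
    render(<Greetings name="Ada" onSendWaves={onSendWavesMock} />);
    userEvent.click(screen.getByRole('button'));
    expect(onSendWavesMock).toHaveBeenCalledTimes(1);
  });
});
```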
In the next post, we will test React hooks with @testing-library/react-hooks.