Tutorials on JavaScript

Learn about JavaScript from fellow newline community members!

  • React
  • Angular
  • Vue
  • Svelte
  • NextJS
  • Redux
  • Apollo
  • Storybook
  • D3
  • Testing Library
  • JavaScript
  • TypeScript
  • Node.js
  • Deno
  • Rust
  • Python
  • GraphQL

Custom Annotations on XY Line Charts with visx and React

In this article, we will learn how to build an XY line chart with custom annotations using visx and React.

visx is a library built and maintained by the folks at Airbnb that provides low-level visualization primitives for React. It is a thin wrapper around D3.js and is highly customizable for any of your data visualization needs. visx provides React components encapsulating D3 constructs, taking away some of the complexity and learning curve involved in working with D3.

In this tutorial, we will learn how to use custom annotations to enrich and add context to your line charts using visx and React. We will be charting Apple Inc.'s (AAPL) stock price over the last ten years and overlaying it with annotations for different product launch dates. This will help us understand how the stock price was affected by various important launches in the company's history.

Let us start by creating a stock-standard React TypeScript app using create-react-app. We can then install the @visx/xychart library, which we need for this tutorial, along with date-fns, which we will use for date manipulation.

In this tutorial, we will use historical stock price data for Apple (AAPL) from Kaggle. I've transformed the raw CSV data into JSON and simplified it to have just two main properties per data point - the x property representing the date and the y property representing the closing stock price on that date. I have also curated an additional dataset containing dates for important Apple product launches and company events in the last ten years. This has been combined with the stock price data - some of the data points have an additional events property which describes the events that occurred around that time as an array of strings. The data can be found in the GitHub repo for this tutorial.

Let us use the components from the @visx/xychart library that we installed earlier to create a simple plot using the first dataset from step 2. Let us take a closer look at the different components used in the chart. When the Chart component is instantiated in App.tsx, your app should look somewhat like this:

Now that we have a basic chart up and running, we can use the additional data in the events properties to add custom annotations to the chart. This can be done using the Annotation component from @visx/xychart. labelXOffset and labelYOffset are pixel values that indicate how far away the annotation needs to be from the data point it is associated with - this prevents the annotation from completely overlapping and obscuring the point in question. We've filtered the data points from stockPrices down to those that have the events property, and added an annotation for each of them. Each annotation has a label that displays the date and all the events for that date. The label is attached to the data point using an AnnotationConnector. With the annotations added, your chart should now look like this:

The annotations help provide a better picture of the company over the years, and can offer possible explanations for the variations in share price (do note, however, that correlation does not necessarily imply causation 😉).

In this tutorial, we have used the example of Apple's share price variations to understand how to plot an XY chart with custom annotations using visx and React. There are a number of improvements that can still be made to the chart. You can read more about the XY Chart in the official docs. As always, all the code used in this tutorial is available on GitHub.
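To make the steps above concrete, here is a minimal sketch of what such a chart component might look like. The data shape, offsets, and label props are assumptions based on the description above rather than the tutorial's exact code.

```tsx
import React from 'react';
import {
  XYChart,
  AnimatedAxis,
  AnimatedLineSeries,
  Annotation,
  AnnotationLabel,
  AnnotationConnector,
  AnnotationCircleSubject,
} from '@visx/xychart';

// Shape assumed from the datasets described above.
type PricePoint = { x: string; y: number; events?: string[] };

const accessors = {
  xAccessor: (d: PricePoint) => new Date(d.x),
  yAccessor: (d: PricePoint) => d.y,
};

// Offsets keep each label from covering the point it annotates.
const labelXOffset = 40;
const labelYOffset = -50;

export const Chart = ({ stockPrices }: { stockPrices: PricePoint[] }) => (
  <XYChart width={800} height={400} xScale={{ type: 'time' }} yScale={{ type: 'linear' }}>
    <AnimatedAxis orientation="bottom" />
    <AnimatedAxis orientation="left" label="Closing price (USD)" />
    <AnimatedLineSeries dataKey="AAPL" data={stockPrices} {...accessors} />

    {/* One annotation per data point that has associated events. */}
    {stockPrices
      .filter((d) => d.events)
      .map((d) => (
        <Annotation
          key={d.x}
          dataKey="AAPL"
          datum={d}
          dx={labelXOffset}
          dy={labelYOffset}
        >
          <AnnotationLabel
            title={d.x}
            subtitle={d.events?.join(', ')}
            showAnchorLine={false}
            backgroundFill="rgba(255, 255, 255, 0.85)"
          />
          <AnnotationCircleSubject />
          <AnnotationConnector />
        </Annotation>
      ))}
  </XYChart>
);
```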


Build Your Own JavaScript Micro-Library Using Web Components: Part 4 of 4

In this capstone tutorial, we're going to actually use the micro-library in app code so you can see how it makes things easier for developers in real-world development. In the previous steps of this 4-part tutorial, this is what we accomplished:

In this final tutorial, we will refactor an example component to use the @Component decorator and the attachShadow function from our micro-library. We're refactoring a file, packages/component/src/card/Card.ts, which contains the CardComponent class. This is a regular Web Components custom element.

To get it to use our micro-library, we first import Component and attachShadow from our micro-library. Next, we add the Component decorator to CardComponent. We then remove the line at the bottom of the file that registers the component - customElements.define('in-card', CardComponent); - noting the tag name in-card, since registration is now handled automatically by our micro-library. We set the selector property of the ElementMeta passed into Component to in-card, the same string originally used to register the component.

Next, we move the content of the style tag in the constructor to the new style property on ElementMeta. We do the same for the template of CardComponent, migrating the HTML to the new template property until the ElementMeta is filled in. Finally, we remove everything in the constructor and replace it with a call to our micro-library's attachShadow function, passing in this as the first argument. This automates the Shadow DOM setup.

To make sure everything is working properly, this is where we start up the development server and observe the changes in the browser. Nothing should have changed about the user interface. Everything should appear the same. Our CardComponent has now been successfully refactored to use the micro-library's utilities, eliminating boilerplate and making the actual component code easier to reason about.

That completes this 4-part tutorial series on building a micro-library for developing with Web Components. Our micro-library supports autonomous and form-associated custom elements. It enables developers to automate custom element setup as well as Shadow DOM setup, so they can focus on the unique functionality of their components. In the long run, these efficiencies add up to a lot of saved time and cognitive effort. If you want to dive more into ways to build long-lived web apps that use Web Components and avoid lock-in to specific JavaScript frameworks, check out Fullstack Web Components: Complete Guide to Building UI Libraries with Web Components.
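As a rough illustration of the end result, here is what a refactored Card.ts might look like after these changes. The import path, styles, and template markup below are placeholder assumptions, not the book's exact code.

```ts
import { Component, attachShadow } from '@in/common';

@Component({
  selector: 'in-card',
  style: `
    :host {
      display: block;
      border-radius: 8px;
      box-shadow: 0 1px 4px rgba(0, 0, 0, 0.2);
    }
  `,
  template: `
    <header><slot name="header"></slot></header>
    <section><slot name="content"></slot></section>
    <footer><slot name="footer"></slot></footer>
  `,
})
export class CardComponent extends HTMLElement {
  constructor() {
    super();
    // One call replaces the manual Shadow DOM boilerplate,
    // and customElements.define is no longer needed here.
    attachShadow(this);
  }
}
```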


Build Your Own JavaScript Micro-Library Using Web Components: Part 3 of 4

Here is Part 3/4 of our tutorial series on building a JavaScript micro-library for creating your apps with Web Components. As I pointed out in previous lessons, the micro-library eases the path to development with Web Components, automating a lot of the work so developers can build their apps faster. Here's what we covered so far:

Now in this tutorial, Part 3, we will automate another piece of functionality for classes that use our decorator. In this case, we'll automatically attach a Shadow DOM to those classes so that the user of the library does not have to manually create a Shadow DOM for their custom elements.

Now that we have ElementMeta stored on the prototype of any class using the Component decorator, our next step is to write a reusable function that'll be used in the constructor of the same class to instantiate the Shadow DOM. By abstracting this logic into a reusable function, we'll reduce several lines of code in each component implementation down to one line. Basically, we want to take something like this... ...and reduce it to one line. The first argument of attachShadow is the instance of the class which, in the constructor, you can reference as this. The second argument is the Object that configures the call to element.attachShadow. You can read more about element.attachShadow on MDN.

To start development of this new function, make a new directory named template in packages/common/src and create a new file in that directory named index.ts. Create another file in the directory, named shadow.ts. In packages/common/src/template/shadow.ts, create a new function named attachShadow and export it. Declare two arguments for attachShadow: context and options. Make options optional with ?, type context as any, and type options as ShadowRootInit, a type definition exported from lib.dom.d.ts. Follow up in packages/common/src/template/index.ts and ensure attachShadow is exported from the main index.ts. Finally, in packages/common/index.ts, export the attachShadow function.

Jumping back to packages/common/src/template/shadow.ts, fill in the algorithm for attachShadow. Make a const named shadowRoot, typed as ShadowRoot, equal to context.attachShadow(options). On the next line, make a const named template, equal to document.createElement('template'). This line creates a new HTML template. Set the content of the HTML template using the ElementMeta stored on the prototype of whatever class will use this attachShadow function: inject context.elementMeta.style inside a style tag, followed by context.elementMeta.template. Finally, append a clone of the HTML template to the ShadowRoot. When you are finished, the attachShadow function should look like this:

With Component and attachShadow now supporting autonomous and form-associated custom elements, you can now use the new decorator pattern in actual components. Build the @in/common package again so files inside the @in/ui package can pick up the latest changes.

We're almost done building this Web Components micro-library, though there are a lot more features you could add. In the final lesson in building our micro-library, we'll refactor some example components to use the micro-library so you can see how end developers actually use the library. For more about building UI Libraries using Web Components, check out our latest book Fullstack Web Components: Complete Guide to Building UI Libraries with Web Components.
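Following the description above, a sketch of the finished attachShadow might look like this. The { mode: 'open' } fallback for when no options are passed is an assumption, not something spelled out in the lesson.

```ts
// packages/common/src/template/shadow.ts
export function attachShadow(context: any, options?: ShadowRootInit) {
  // Attach the Shadow DOM to the component instance ({ mode: 'open' } is an assumed default).
  const shadowRoot: ShadowRoot = context.attachShadow(options ?? { mode: 'open' });

  // Build an HTML template from the ElementMeta stored on the prototype by @Component.
  const template = document.createElement('template');
  template.innerHTML = `
    <style>${context.elementMeta.style}</style>
    ${context.elementMeta.template}
  `;

  // Append a clone of the template content to the ShadowRoot.
  shadowRoot.appendChild(template.content.cloneNode(true));
}
```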


Build Your Own JavaScript Micro-Library Using Web Components: Part 1

If you've ever wondered how libraries like React, Preact, or Svelte work under the hood, this is a great exploration of what you need to know. Using Web Components means that your own micro-library, which we build in this series, will work easily with any JavaScript codebase. This achieves greater levels of code reuse. Let's dive in.

When building with Web Components, you will rely heavily on the set of specifications that make up Web Components. The main benefits Web Components bring to the table - reuse, interoperability, accessibility, and a long lifespan - are due to their reliance on browser specifications. Had we adopted a library or framework, we might have lost some or all of these characteristics in the user interfaces built with it. UI components coded with some libraries aren't interoperable with other JavaScript libraries or frameworks, which puts a hard limit on reuse.

Even though we gain these benefits from Web Components, we lose the benefits JavaScript libraries have to offer. Frameworks and libraries like React, Vue, Angular, and Svelte provide an abstraction around browser specifications. React, for example, famously opted for a purely functional approach, giving engineers "hooks" to manage side effects in user interfaces. JavaScript libraries and frameworks provide architectural patterns that make life easier for the web developer, offering features not available with browser specifications alone, like data binding and state management.

What if we could retain the benefits of Web Components while also gaining the architectural design of a JavaScript framework? Indeed, there are many such Web Component libraries already. Now you get to build your own. In this 4-part tutorial series, we'll demystify the inner workings of Web Components libraries as you get to develop your own. Using TypeScript decorators, we'll develop a new interface that simplifies development but doesn't compromise on performance. The micro-library we'll code optimizes to less than 1 Kb of minified JavaScript.

Among its features is that it allows the components we develop to transform from something like this: ...to instead use a TypeScript decorator named Component, like this: Decorators are denoted by the @ symbol, followed by the name of the decorator function, in this case Component. In the above example, the Component function is called with a single argument: an Object where the developer can declare the tag name, CSS style, and HTML template. These are the advantages of using class decorators to handle templates and styling:

In addition to making a class decorator that allows you to declare a tag name, styling, and template with a cleaner interface than before, you'll also code a method decorator that simplifies binding event listeners to custom elements. Instead of typing this.addEventListener('click', this.onClick), what if you could decorate the onClick method and still provide the same functionality? It could look something like this (a rough sketch appears at the end of this article). If all of this seems foreign, don't fret. Coding decorators is much like coding any JavaScript function. Providing these framework-like features to custom elements may be easier than you think.

Think of a micro-library as a collection of prewritten code used for common development tasks that has a small footprint. Micro-libraries may have similar functionality to much larger libraries, but with way less code that optimizes down to a few Kb or maybe even less than 1 Kb. That's why they are "micro". Micro-libraries have existed for a while. Famously, Preact is a ~3 Kb alternative to the ~45 Kb React. Micro-libraries exist, in part, to provide a more performant alternative to popular JavaScript libraries. In the context of custom elements, micro-libraries are an interesting solution because we can gain the functionality of a JavaScript library at little expense to performance.

In Parts 2-4 of this micro-library tutorial series, I'll show you how to identify reusable parts of Web Components code and abstract logic away from each component implementation in a functional manner. You'll code a class decorator that handles declaration of a component selector, styling, and template. You'll learn how to use method decorators to attach event listeners to DOM elements. You'll have a basic Web Components micro-library you can expand upon and use in actual apps.

For a deep dive into Web Components, check out our latest book - Fullstack Web Components: Complete Guide to Building UI Libraries with Web Components. It covers how to build robust UI libraries and entire applications using Web Components.
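To give a feel for the developer experience described above, here is a rough, hypothetical sketch of a component using both decorators. The decorator names and options mirror this series' description, but the exact API in the book may differ.

```ts
import { Component, Listen, attachShadow } from '@in/common';

@Component({
  selector: 'in-counter',
  style: `:host { font-family: sans-serif; }`,
  template: `<button>+</button> <span id="count">0</span>`,
})
export class CounterComponent extends HTMLElement {
  private count = 0;

  constructor() {
    super();
    attachShadow(this);
  }

  // Listen is the hypothetical method decorator described above; it replaces
  // writing this.addEventListener('click', this.onClick) in the constructor.
  @Listen('click')
  onClick() {
    this.count += 1;
    const span = this.shadowRoot?.querySelector('#count');
    if (span) span.textContent = String(this.count);
  }
}
```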


Fullstack Web Components is now LIVE 🎉

Web Components are a standard JavaScript technology whose adoption has soared in recent years. Since they enable your components to work in any JavaScript codebase, whether you are using frameworks/libraries like React, Angular, and Vue, or vanilla JavaScript, you can use Web Components everywhere. Author Stephen Belovarich, Principal Software Development Engineer at Workday, unpacks practical ways to build real-world apps using the latest evolution of the web spec.

In Part 1 of the book, you learn the basics of Web Components and build some standard component building blocks. Part 2 walks you step by step through building a library for Web Components and leveraging the library in actual development. In Part 3, you integrate Web Components into a full app with JavaScript on the front end as well as Node.js with Express on the backend. In the course of building these practical projects, these are some of the skills you will learn:

Get hands-on experience coding UI with Web Components, but also learn how to test and maintain those components in the context of a distributed UI library, in Fullstack Web Components.


Building an API using Firebase Functions for cheap

When I am working on personal projects, I often find the need to set up an API that serves up data to my app or webpages. I get frustrated when I end up spending too much time on hosting and environment issues. These days what I end up doing is hosting the API using Cloud Functions for Firebase. It hits all my requirements:

The official name is Cloud Functions for Firebase. In this article, I am going to call it Firebase Functions. This is mostly to distinguish it from Google's other serverless functions-as-a-service: Cloud Functions. You can read more about the differences here. While I'm not going to write a mobile app in this article, I like to use Firebase Functions because: If all this isn't confusing enough, Google is rolling out a new version of Cloud Functions called 2nd generation, which is in "Public Preview". So in order to move forward, let's identify our working assumptions:

After all this is complete, you should have a single file called firebase.json and a directory called functions. The functions directory is where we'll write our API code. We'll take the emulator out for a spin. Congrats, you have Firebase Functions working on your local system! To exit the emulator, just press Ctrl-C in your terminal window.

This is all very exciting. Let's push our new "hello world" function into the cloud. From the command line, type: The output should look similar, but not identical, to: And if we navigate to the Function URL, we should get the 'Hello from Firebase!' message. Exciting! Do you see how easy it is to create Firebase Functions? We've done the hard part of setting up our local environment and the Firebase project.

Let's jump into creating an API using Express. Install express: Next, edit the index.js file to look like: Then, if you run the emulator again, you can load up your API locally. Note that the URL on the emulator is a little different - it should have 'api' added at the end, like: You should see our 'Hello World' message. Now for more fun, add '/testJSON' to the end of your link. You should see the browser return JSON data that our API has sent.

Now finally, let's deploy to the cloud: Note that when you try to deploy, Firebase is smart enough to detect that major changes to the URL structure have occurred. You'll need to verify that you did indeed make these changes and everything is ok. Since this is a trivial function, you can type Yes. Firebase will delete the old function we deployed earlier and create a new one. Once that completes, try to load the link and validate that your API is now working!

This article has walked you through the basics of using Firebase Functions to host your own API. The process of writing and creating a full-featured API is beyond the scope of this article. There are many resources out there to help with this task, but I hope you'll think about Firebase Functions next time you are starting a project.
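As a rough sketch of what the finished functions/index.js described above might contain - the routes and response payloads here are illustrative, not the article's exact code:

```js
// functions/index.js - an Express app exposed as a Firebase Function.
const functions = require('firebase-functions');
const express = require('express');

const app = express();

// Root route: the 'Hello World' message.
app.get('/', (req, res) => {
  res.send('Hello World');
});

// JSON route: visiting /testJSON returns JSON data from our API.
app.get('/testJSON', (req, res) => {
  res.json({ message: 'Hello from our API', items: [1, 2, 3] });
});

// Export the Express app as a single HTTPS function named "api",
// which is why the emulator URL gains an "api" segment at the end.
exports.api = functions.https.onRequest(app);
```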


Cypress Studio - the underrated feature speeding up e2e testing

Testing is basically a requirement for modern software today, not a nice-to-have. In the past, end-to-end testing was hard to set up, flaky, and generally a pain to deal with, but it's the best automated testing option to confirm software works. Cypress.io continues to improve the e2e testing experience, and its new feature Cypress Studio takes it a step further to make writing tests quicker and easier too.

When Cypress.io first hit the scene in 2015, it made a splash because it fixed so many of the issues that existed with other end-to-end testing (e2e) competitor frameworks. Between good documentation, intuitive syntax, improved debugging, and no reliance on Selenium under the hood, everything about Cypress was a major step forward for e2es, but it wasn't content just to stop there. The team behind Cypress regularly releases new features and functionality to make it more and more useful for devs, and to make e2e testing (traditionally kind of a pain) easier and easier as well. One recent release that's currently tucked behind a feature flag is called Cypress Studio, and it's an absolute game changer.

Today, I'll show you how to add Cypress to an existing JavaScript project, enable Cypress Studio, and let it help do the heavy lifting of writing end-to-end tests. It's such a cool feature to help dev teams save time on testing, deliver new features faster, and still ensure the mission-critical functionality of the app continues to work as expected.

Although Cypress is kind enough to provide a host of sample scripts to show many of its features in action, it really shines with a local app to test against, and I just so happen to have one that fits the bill. The app I'm using is a React-based movie database that allows users to see upcoming movies and movies in theaters now, browse movies by genre, and search for movies by title. This will be a good app to demonstrate Cypress Studio's power.

Once we've got an app to add Cypress to, the first thing we'll need to do is download Cypress to it. This is another reason Cypress stands head and shoulders above its competitors: one npm download gives you all the tools you need to start writing e2es. No dev dependencies, no extra libraries with mismatched package versions, none of that nonsense to deal with. At the root of your project, where your package.json file lives, run the following command from the terminal: This will add a bunch of new Cypress-based folders and files to your app, and with just a few small configuration changes we'll be ready to go. See all those new folders under cypress/? That's what you should see after the initial installation.

For ease of use, I like to add npm scripts for the two main Cypress commands we'll be using. In your package.json file, add the following two lines in your "scripts" section. Now when we need to run the tests, a simple npm run cy:run or npm run cy:open, straight from the command line, will do the trick.

Ok, before we get to writing our own tests, let's run the pre-populated tests in Cypress to get familiar with its Test Runner. From your command line, run the following shell command: This should open the Cypress Test Runner, and from here, click the Run integration spec button in the top right-hand corner to run through all the pre-made tests once. After all the Cypress tests run and pass, we're ready to delete them and get to work on our own tests for our app. Go ahead and clear all the files out of the Cypress folders of fixtures/ and integration/.

You'll probably also want to add the folders of cypress/screenshots/ and cypress/videos/ to your .gitignore file, just so you don't commit the screenshots and videos that Cypress automatically takes during test runs to your GitHub repo (unless you want to, of course).

Add baseUrl variable

With that taken care of, let's set up a baseUrl in our cypress.json file and enable Cypress Studio there too.

Turn on experimentalStudio

To enable Cypress Studio, just add "experimentalStudio": true to our cypress.json file. So here's what the cypress.json file will end up with. Now we can write our first test file and test.

Inside of the cypress/integration/ folder in your project, create a new test file named movie-search-spec.js. This folder is where Cypress will look for all your e2e test files when it runs. Give it a placeholder test: we have to tell Cypress where we want it to record the test steps we're going to show it. So just create a typical describe test block and, inside of that, create an it test. A good first test would be to check that a user can navigate to the movie search option, search for a particular movie name, and click into the results based on that search. Here's what my empty testing placeholder looks like in the movie-search-spec.js file. I think we're about ready to go.

One thing you must do before starting up Cypress to run tests against your local app is to start the app locally. For Cypress, it's an anti-pattern to start the app from a test, so just fire it up in a separate terminal, then open up the Cypress Test Runner.

Start the movie app

In one terminal, run our movie app:

Start the Cypress Test Runner

And in a second terminal, open the Cypress Test Runner:

In Cypress, Add Commands to Test

When the Cypress Test Runner is open, enter our test file and click the tiny blue magic wand that says Add Commands to Test when you hover over it. And from here, go for it - test the app. For me, I clicked the Movie Search link in the nav bar, typed "Star Wars" into the search box, clicked into one of the results, etc. When you're satisfied with what your test is doing, click the Save Commands button at the bottom of the test, and Cypress will run back through all the commands it's just recorded from your actions. Tell me that's not cool.

If you go back to your IDE now, you'll see all the actions Cypress recorded, along with a few comments to tell you it was Cypress generating the code and not a developer. This is what my test now looks like: Just wow, right?

Although our test is good, Cypress can't be expected to test for all the things a developer might know are important. Things like the number of movies returned from searching "star wars", or checking the title of the movie being clicked into and the contents of the movie page itself. I'll fill in some of those details myself now. If you look at my code above, I added comments after the extra assertions I wrote - mainly small things like checking for the search text, the count of movies returned, and movie info like the rating and release date on this specific movie. I didn't add a ton of extra code, just a few extra lines of details. Now run the test again and check the results. And we're done! Congrats - our first Cypress Studio-assisted e2e test is written. Just repeat these steps for as many end-to-end tests as you need to write, and prepare to be amazed at how much time it saves you.
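For reference, here is a hedged sketch of how the pieces above fit together: the cypress.json additions and a spec in the spirit of what Cypress Studio records plus a few hand-written assertions. The selectors and assertions are illustrative assumptions, not the recorded output from the real movie app.

```js
// cypress/integration/movie-search-spec.js
// (cypress.json also needs: { "baseUrl": "http://localhost:3000", "experimentalStudio": true })

describe('Movie search', () => {
  it('lets a user search for a movie and open a result', () => {
    cy.visit('/');

    // Steps like these are what Cypress Studio records for you;
    // the selectors below are placeholders, not taken from the real app.
    cy.get('[data-testid="nav-movie-search"]').click();
    cy.get('[data-testid="search-input"]').type('Star Wars');

    // Hand-written assertions layered on top of the recorded steps.
    cy.get('[data-testid="movie-card"]').should('have.length.greaterThan', 0);
    cy.get('[data-testid="movie-card"]').first().click();
    cy.get('[data-testid="movie-title"]').should('contain.text', 'Star Wars');
  });
});
```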
In modern software, testing is a critical piece of any solid enterprise application. It helps ensure our app continues to function as expected while adding new functionality, and end-to-end testing is the closest we can get to mimicking exactly how a user would use the app with automation. Cypress broke the mold of what e2e testing frameworks are capable of when it debuted back in 2015, and it's only continued to improve with time. My favorite new feature of late is the ability to show Cypress how a test should act instead of writing it yourself, with Cypress Studio - the time-saving possibilities are immense. And more time saved means finishing features faster and getting new functionality into the hands of users quicker. Win win win.

In 10 modules and 54 lessons, I cover all the things I learned while at The Home Depot that go into building and maintaining large, mission-critical React applications - because it's so much more than just making the code work. From tooling and refactoring to testing and design system libraries, there's a ton of material and hands-on practice here to prepare any React developer to build software that lives up to today's high standards. I hope you'll check it out.


Introducing Volta - it manages your Node.js versions so you don't have to

Web development is tough enough as it is; something as mundane as mismatched versions of Node in development versus production shouldn't be another thing you have to keep in mind. Volta can prevent this sort of issue and so much more for you and your dev team automatically, and it's easy to set up to boot. Read on to get started using it yourself.

When you're working with a team of developers, especially on a team responsible for managing multiple applications, you very well might have JavaScript apps that run on different versions of Node.js. Some might use Node 10, others Node 12; some may use Yarn as their package manager, others might use npm - and keeping track of all that is really hard. Ensuring every developer on the team is developing with the correct versions all the time is even harder. But it's essential. While the consequences might be relatively minor during local development - it works on one dev's machine and throws an error on another's - this sort of lack of standardization and clarity can have devastating effects when it comes to production. And it could all have been avoided if we'd been using a handy little tool called Volta. I want to introduce Volta to you today so you can avoid this kind of stress - it's simple to get started with and can prevent catastrophes like this.

What this means in practice is that Volta makes managing Node, npm, Yarn, or other JavaScript executables shipped as part of packages really easy. I've told you what Volta is, but you're probably still wondering why I chose it in particular - it's certainly not the only game in town. NVM is another well-known option for managing multiple versions of Node. I used to use Node Version Manager (NVM) myself. Heck, I even wrote a whole blog post about how useful it was. NVM is good: it does exactly what it sounds like, letting you easily download and switch versions of Node.js on your local machine. While it does make this task simpler, NVM is not the easiest to set up initially, and, more importantly, the developer using it still has to remember to switch to the correct version of Node for the project they're working on.

Volta, on the other hand, is easy to install, and it takes the thinking part out of the equation: once Volta's set up in a project and installed on a local machine, it will automatically switch to the proper versions of Node. Yes, you heard that right. Similar to package managers, Volta keeps track of which project (if any) you're working on based on your current directory. The tools in your Volta toolchain automatically detect when you're in a project that's using a particular version of the tools, and take care of routing to the right version of the tools for you. Not only that, but it will also let you define yarn and npm versions in a project, and if the version of Node defined in a project isn't downloaded locally, Volta will go out and download the appropriate version. But when you switch to another project, Volta will defer to any presets in that project or revert back to the default environment variables. Cool, right?

Ready to see it in action? For ease of getting started, let's create a brand new React application with Create React App, then we'll add Volta to our local machine and our new project. First things first, create a new app. Run the following command from a terminal. Once you've got your new React app created, open up the code in an IDE and start it up via the command line. If everything goes according to plan, you'll see the nice, rotating React logo when you open up a browser at http://localhost:3000.

Now that we've got an app, let's add Volta to it. Installing Volta on your development machine is a piece of cake - no matter your chosen operating system.

Unix

If you're using a Unix-based system (macOS, Linux, or the Windows Subsystem for Linux - WSL) to install Volta, it's super easy. In a terminal, run the following command:

Windows

If you've got Windows, it's almost this easy. Download and run the Windows installer and follow the instructions.

Once Volta's finished downloading, double-check it installed successfully by running this command in your terminal: Hopefully, you'll see a version for Volta like my screenshot below. If you don't, try quitting your terminal completely, re-opening a new terminal session, and running that command again. The current version of Volta on my machine is now v1.0.5.

Before we add our Volta-specific Node and npm versions to our project, let's see what the default environment variables are.

Get a baseline reading

In a terminal at the root of your project, run the following line: For me, my default versions of Node and npm are v14.18.1 and v6.14.15, respectively. With our baseline established, we can switch up our versions just for this project with Volta's help.

Pin a node version

We'll start with Node. Since v16 is the current version of Node, let's add that to our project. Inside of our project, at the root level where our package.json file lives, run the following command. Using volta pin [JS_TOOL]@[VERSION] will put this particular JavaScript tool at our specified version into our app's package.json. After committing this to our repo with git, any future devs using Volta to manage dependencies will be able to read this out of the repo and use the exact same version. With Volta we can be as specific or as generic as we want when defining versions, and Volta will fill in any gaps. I specified the major Node version I wanted (16) and then Volta filled in the minor and patch versions for me. Pretty nice! When you've successfully added your Node version, you'll see the following success message in your terminal: pinned node@16.11.1 in package.json (or whatever version you pinned).

Pin an npm version

That was pretty straightforward; now let's tackle our npm version. Still in the root of our project in the terminal, run this command: In this particular instance, I didn't even specify any sort of version for npm, so Volta defaults to choosing the latest LTS release to add to our project. Convenient. The current LTS version for npm is 8, so our project's been given npm v8.1.0 as its default version.

To confirm the new JavaScript environment versions are part of our project, check the app's package.json file. Scroll down to the bottom and you should see a new property named "volta". Inside of the "volta" property should be a "node": "16.11.1" and an "npm": "8.1.0" version. From now on, any dev who has Volta installed on their machine and pulls down this repo will have their settings for these tools automatically switch to use these particular node and npm versions. To make doubly sure, you can also re-run the first command we did before pinning our versions with Volta to see what our current development environment is now set to. After this, your terminal should tell you it's using those same versions: Node.js v16 and npm v8. Now, you can sit back and let Volta handle things for you. Just like that. 😎
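To recap the pinning steps in one place, the commands and the resulting package.json field look roughly like this. The version numbers are the ones from this walkthrough and will differ on your machine.

```
# Run at the project root
volta pin node@16
volta pin npm

# package.json then gains a section similar to:
#
#   "volta": {
#     "node": "16.11.1",
#     "npm": "8.1.0"
#   }
```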
If you want to see what happens when there's nothing specified for Volta (like when you're just navigating between repos or using your terminal for shell scripts), try navigating up a level from your project's root and checking your Node and npm versions again. In the screenshot below, I opened two terminals side by side: the one on the left is inside of my project with Volta versions, the one on the right is a level higher in my folder structure. I ran the following command in both: And in my project, Node v16 and npm v8 are running, but outside of the project, Node v14 and npm v6 are present. I did nothing but switch directories and Volta took care of the rest. Try and tell me this isn't cool and useful. I dare you. 😉

Building solid, stable apps is tough enough without having to also keep track of which versions of Node, yarn, and npm each app runs best with. By using a tool like Volta, we can take the guesswork out of our JavaScript environment, and actually make it harder for a member of the dev team to use the wrong versions than the right ones. And remember to double-check that your local Node version matches your production server's Node version, too.

In 10 modules and 54 lessons, I cover all the things I learned while at The Home Depot that go into building and maintaining large, mission-critical React applications - because it's so much more than just making the code work. From tooling and refactoring to testing and design system libraries, there's a ton of material and hands-on practice here to prepare any React developer to build software that lives up to today's high standards. I hope you'll check it out.


NPM: What are project dependencies?

Code dependencies are like Lego bricks. We're able to pull in other people's code, combining and stacking different packages together to fulfill our goals. Using dependencies greatly reduces the complexity of developing software. We can take advantage of the hard work someone has already done to solve a problem so we can continue to build the projects we want. A development pipeline can have multiple kinds of code dependencies:

In JavaScript, we have a package.json file that holds metadata about our project. package.json can store things like our project name, the version of our project, and any dependencies our project has. dependencies, devDependencies, and peerDependencies are properties that can be included in a package.json file. Where the code will be used determines what type of dependency a package is.

There are packages that our users will need to run our code. A user is someone not directly working in our code base. This could mean a person interacting with an application we wrote, or a developer writing a completely separate library. In other words, this is a production environment. Alternatively, there are packages that a developer or system only needs while working in our code - for example linters, testing frameworks, build tools, etc. These are packages that a user won't need, but a developer or build system will.

Dependencies are packages our project uses in production. These get included with our code and are vital for making our application run. Whenever we install a dependency, the package and any of its dependencies get downloaded onto our local hard drive. The more dependencies we add, the bigger our production code becomes, because each new dependency gets included in the production build of our code. Avoid adding new dependencies unless they're needed! Dependencies are installed using npm install X or yarn add X.

Packages needed in development, or while developing our code, are considered dev dependencies. These are programs, libraries, and tools that assist in our development workflow. Dev dependencies also get downloaded to your local hard drive when installed, but the user will never see these dependencies. So adding a lot of dev dependencies only affects the initial yarn or npm install completion time. Dev dependencies are installed using npm install --save-dev X or yarn add --dev X.

Peer dependencies are similar to dependencies except for a few key features. First, when installing a peer dependency it doesn't get added to your node_modules/ directory on your local hard drive. Why is that? Well, peer dependencies are dependencies that are needed in production, but we expect the user of our code to provide the package. The package doesn't get included in our code. This is to avoid including multiple copies of the same dependency in production. If every React library included a version of React as a dependency, then in production our users would download React multiple times. Peer dependencies are a tool for library owners to optimize their project size. Peer dependencies are installed using yarn add --peer X.

I recently released a course, Creating React Libraries from Scratch, where we walk through topics just like this - how npm dependencies work - and a lot more! We start with an empty project and end the course with a fully managed library on npm. To learn more, click the link below! Click to view Creating React Libraries from Scratch!
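To make the three dependency types concrete, here is a minimal, illustrative package.json for a hypothetical React library; the package names and version ranges are placeholders.

```json
{
  "name": "my-react-library",
  "version": "1.0.0",
  "dependencies": {
    "lodash": "^4.17.21"
  },
  "devDependencies": {
    "eslint": "^8.0.0",
    "jest": "^29.0.0"
  },
  "peerDependencies": {
    "react": ">=17.0.0"
  }
}
```

In this sketch, lodash ships with the library's production code, eslint and jest are only used while developing it, and react is expected to be provided by whatever app consumes the library.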


Storyboarding - The right way to build apps

React Native is a platform for developing apps that can be deployed to multiple platforms, including Android and iOS, providing a native experience. In other words: write once, deploy multiple times. This tenet holds true across most aspects of app development. Take, for example, usability testing. In native development, teams would need to test business logic separately on each platform. With React Native, it only needs to be tested once. The code we write using React Native is good to go on both platforms and, in most cases, covers more than 90% of the entire code base.

The React Native platform offers a plethora of options. However, knowing which to use and when comes from understanding how those pieces fit together. For example, do you even need a database, or is AsyncStorage sufficient for your use case? Once you get the hang of the ecosystem around React Native, building apps will become the easy part. The tricky parts are knowing how to set up a strong foundation that helps you build a scalable and maintainable app, and using React Native the right way. If we look at the app as a product that our end users will use, we will be able to build a great experience, not just an app. Should that not be the principal aim of using a cross-platform tool? Let's try and break it down. Using React Native we are: Looking at the points above, it's clear that focusing on building our app as a product makes the most sense. Having a clear view of what we are looking to build will help us get there. It will also keep us in check that we are building the right thing, focusing on the end product and not getting lost in the technicalities or challenges of a platform. Storyboarding the app will help us achieve just that.

I recommend a storyboarding approach for building any front-end application, not just apps. This is not the typical storyboard that is created by design teams, though the idea is similar. These storyboards can be a great way of looking at our app from a technical implementation point of view, too. This step will help us: To start, we will need to go through the wireframe design of the app. The wireframe is sufficient, as we will not be focusing on colors and themes here. Next, we will go through every screen and break it down into reusable widgets and elements. The goals of this are multi-fold: For example, let's look at a general user onboarding flow: A simple and standard flow, right? Storyboarding can achieve quite a lot from the perspective of the app's development and structure. Let us see how.

Visualize your app from a technical, design, and product standpoint

As you will see, we have already defined eight or nine different elements and widgets here. Also, if elements like the search box, company logo, and the cart icon need to appear on all screens, they can be put inside a Header widget. The process also helps us build consistency across the app. I would recommend building custom elements for even basic native elements like the Text element. What this does is make the app very maintainable. Say, for some reason, the designer decides to change the app's font tomorrow. If we have a custom element, changing that is practically a one-line change in the application's design system. That might sound like an edge case, but I am sure we have all experienced it. What about changing the default font size of the app, or using a different font for bold text, or supporting dark mode?

The Atomic Design pattern talks about breaking any view into templates, organisms, molecules, and atoms. If you have not heard about it, Atomic Design comes highly recommended, and you can read about it here. Taking a cue from the methodology, we will break down the entire development process into elements and widgets and list out all those that we will require to build the views. How do you do this? The steps are as follows: This process will help streamline the entire development process. You'll end up with a list of widgets and elements that you need to build. This list will work like a set of Lego blocks that will build the app for you. You may end up with a list like this for the e-commerce app: Looking at this list, we might decide to build a carousel widget that works as a banner carousel by passing banners as children, and as a category scroller by passing an array of category icons. If we do this exercise of defining every component for the entire app before we start building, it will improve our technical design and allow us to plan better. The process can also help iron out design inconsistencies, as we will be defining all the elements, down to the most basic ones. If, for example, we were to end up with more than four or five primary buttons to define, that could indicate that we need to review the design from a user experience perspective.

Following this model will make the development approach very modular and set us up for the development phase. By now, we should also have a thorough understanding of: We also have an idea of how the layout of views will look from a technical standpoint: do we need a common header, how will transitions happen if there is animation, and so on. To summarize, we now have a wireframed plan in place that will give us a lot of confidence as we proceed with development. To learn more about building apps with React Native, check out our new course, The newline Guide to React Native for JavaScript Developers.


How is Svelte different than React?

To get a better understanding of what Svelte brings us, it helps to step back and look at how we got here.

Back in the 90s, in the original version of the web, there was only HTML. Browsers displayed static documents without any interactivity. The only way to get updated information was by reloading the page or navigating to a new page. In 1995, Netscape released JavaScript, making it possible to execute code on the end user's machine. Now we could do things like: As developers began experimenting with this newfangled JavaScript thing, they found one aspect really tough: dealing with the differences between browsers. Both Netscape Navigator and Internet Explorer did things in their own way, making developers responsible for handling those inconsistencies. The result was code like: This kind of browser-detection code littered codebases everywhere. The extra branching was a nuisance, like a cognitive tax, making code harder to read and maintain. Translation: not fun.

In 2006, John Resig released a compatibility layer called jQuery. It was a way to interact with the DOM without being an expert on browser feature matrices. It completely solved the inconsistency issue. No more if (isNetscape) or if (isIE) conditions! Instead, we could interact with the page using CSS selectors, and jQuery dealt with the browser on our behalf. It looked like this: But there were some challenges here too:

In 2010, Google launched AngularJS 1.x, a framework that helps with state management. Instead of writing jQuery code, like: Expressions (called bindings) could be embedded directly inside the HTML: and Angular would sync those bindings for us. Later, if we change our HTML, say by switching an <h1> to an <h2>, nothing breaks with the Angular version. There are no CSS selectors to update. AngularJS components looked like this: The magic was that anytime you changed something on the $scope variable, Angular would go through a "digest cycle" that recursively updated all the bindings. But there were some problems here too:

In 2013, Facebook launched React, a library for syncing state with UI. It solved some issues that AngularJS 1.x had. It's isomorphic: it can render HTML both on the server and in the browser, fixing the SEO problem. It also implemented a more efficient syncing algorithm called the Virtual DOM. Refresher: the Virtual DOM keeps a copy of the DOM in memory. It uses the copy to figure out what changed (the delta), while limiting potentially slow interactions with the browser DOM. (Though it's been pointed out that this may be overhead.) It's still conceptually similar to AngularJS from a state management perspective. React's setState({value}) or, more recently, the useState() hook, is roughly equivalent to Angular's $scope.value = value. Hook example: React relies on developers to signal when things change. That means writing lots of Hook code. But Hooks aren't trivial to write; they come with a bunch of rules, and those rules introduce an extra cognitive load into our codebases.

In 2019, Rich Harris released Svelte 3. The idea behind Svelte is: what if a compiler could determine when state changes? That could save developers a lot of time. It turns out to be a really good idea. Being a compiler, Svelte can find all the places where our code changes state and update the UI for us. Say we assign a variable inside a Svelte component: Svelte detects the let statement and starts tracking the variable. If we change it later, say year = 2021, Svelte sees the assignment = as a state change and updates all the places in the UI that depend on that binding. Svelte is writing all the Hooks code for us!

If you think about it, a big part of a developer's job is organizing state, moving state back and forth between the UI and the model. It takes effort, and it's tricky to get right. By offloading some of that work to compile-time tools, we can save a lot of time and energy. Another side effect is that we end up with less code. That makes our programs smaller, clearer to read, easier to maintain, cheaper to build, and most importantly: more fun to work with.

P.S. This post is part of a new course called "Svelte for React Devs", so stay tuned!

Firebase Authentication with React

In this article, we will learn how to implement Firebase authentication in a React app. Firebase is an increasingly popular platform that enables rapid development of web and mobile apps. It offers a number of services such as a real-time database, cloud functions, and authentication against various providers. In the following sections, we will build a React app that has three screens:

Let's create a new project on the Firebase Console. We will call our project react-firebase-auth. Once the project is created, we will navigate to the 'Authentication' tab to set up sign-in methods. For now, let us enable sign-ins with an email and password. The only other thing left to do before we start building our React app is to add it to the Firebase project. Once we add a web app to our project, Firebase gives us the app configuration, which looks somewhat like this: We will add these configuration values into the .env file in our app.

Let's use create-react-app to generate a stock React app. We'll call our app react-firebase-auth. Once this is done, we need to add a few dependencies using yarn add: In the app's .env file, we need to add the values supplied by Firebase in Step 1, like so: More information about how environment variables work can be found in the Create React App documentation.

In the src folder, let us create a base.js file where we will set up the Firebase SDK. In app.js, we will wrap our app with the Router component from react-router-dom, which will allow us to use routes, links, and redirects.

In order to store the authentication status and make it globally available within our app, we will use React's context API to create an AuthContext in src/Auth.js. We will hold the currentUser in state using the useState hook, and add an effect that will set this variable whenever the Firebase auth state changes. We also store a pending boolean variable that will show a 'Loading' message when true. In essence, currentUser will be null when logged out and a defined object when the user is logged in. We can use this to build a PrivateRoute component which allows the user to access a route only when logged in. The PrivateRoute component takes a component prop which is rendered when the user is logged in. Otherwise, it redirects to the /login route. We can now use this in our app.js file after wrapping our app with the AuthProvider:

Let us now implement the screens that make up our app. We can start with the simplest one, which is the home screen. All it will have is the title 'Home' and a 'Sign out' button that logs the user out of the app. This is done by calling the signOut method from Firebase's auth module.

Let's now move on to the signup page, which is slightly more complex. It will have the header 'Sign up', with a form that includes text inputs for email and password, and a submit button. We wrap the component with the withRouter HOC to provide access to the history object. When the form is submitted, we will use the createUserWithEmailAndPassword method from Firebase's auth module to sign the user up. We display an alert in case something goes wrong during this process. If the user creation succeeds, we redirect the user to the home ( / ) page using the history.push API.

The login page is very similar to the sign up page, with an identical-looking form. The only difference is that we will log the user in rather than create a new user when the form is submitted.
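As a rough sketch based on the description above (not necessarily the exact code from the repo, and assuming base.js exports the initialized Firebase app as its default export), the login component might look something like this:

```jsx
// src/Login.js
import React, { useCallback, useContext } from 'react';
import { withRouter, Redirect } from 'react-router-dom';
import app from './base'; // the Firebase app configured in base.js
import { AuthContext } from './Auth';

const Login = ({ history }) => {
  const { currentUser } = useContext(AuthContext);

  const handleLogin = useCallback(
    async (event) => {
      event.preventDefault();
      const { email, password } = event.target.elements;
      try {
        // The only real difference from the sign up page:
        // signInWithEmailAndPassword instead of createUserWithEmailAndPassword.
        await app.auth().signInWithEmailAndPassword(email.value, password.value);
        history.push('/');
      } catch (error) {
        alert(error);
      }
    },
    [history]
  );

  // Already logged in? Go straight to the home screen.
  if (currentUser) {
    return <Redirect to="/" />;
  }

  return (
    <div>
      <h1>Log in</h1>
      <form onSubmit={handleLogin}>
        <input name="email" type="email" placeholder="Email" />
        <input name="password" type="password" placeholder="Password" />
        <button type="submit">Log in</button>
      </form>
    </div>
  );
};

export default withRouter(Login);
```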
The component also uses the currentUser value from our AuthContext, and will redirect to the / route when the user is logged in. And there we have it - we have just implemented a React app with sign up, login, and home screens that uses Firebase authentication to register and authenticate users with their email and password.

We have now learned how to implement Firebase authentication in a React app. The code used in this article is available at https://github.com/satansdeer/react-firebase-auth and a video version of this article can be found on YouTube.


Server Side Rendering with React

In this article, we will learn how to render a simple React app using an Express server. Server Side Rendering (SSR) can be a useful tool to improve your application's load time and optimise it for better discoverability by search engines. React provides full support for SSR via the react-dom/server module, and we can easily set up a Node.js based server to render our React app. In the following sections, we will create a simple counter app with React and render it server-side using an Express backend.

Let us use create-react-app to generate a stock React app. We'll call our app ssr-example. Let us modify the App.js file to implement a simple counter component that displays the current count. It will also render buttons with which the counter can be incremented and decremented. In our app's index.js file, we have the following line which tells React where to render our app. This needs to be slightly modified to work with SSR.

To be able to render our app, we must first compile it so that an index.html and the compiled JavaScript are available. You can build the app with the following command:

We will use express.js to set up a simple server for our app. You can install it using the following command in your project folder: Since the server needs to be able to render JSX, we will also need to add some Babel dependencies. We will also install ignore-styles, since we do not want to compile CSS.

Let us create a server using the express module we have just installed. To start, create a folder called server in your project folder, and create a server.js file within it like so: We have just defined an express app that will listen on port 8000 when started. With the app.use() statement, we have also set up a handler for all requests to routes matching the ^/$ regular expression. In the next step, we will add code in the handler to render our app. But before we move on to that, we will need to configure our Babel dependencies to work with the server we have just defined. To do so, create an index.js file in the server folder with the following code that imports the required dependencies and calls @babel/register:

Let us now add the code that actually renders our app. For this, we will use the fs module to access the file system and fetch the index.html file for our app. If there is an error reading the file, we will return a 500 status code with an error message. Otherwise, we can proceed with the rendering. The index.html has a placeholder element, usually a div with the ID root, where it renders the React app. We will use the renderToString function from react-dom/server to render our App component as a string, and append it to the placeholder div.

And that is pretty much it! We're now just one step away from being able to get this up and running. Let us add an ssr script to our package.json file to run the server. You can now start the server from your terminal with the command yarn ssr. When you navigate to http://localhost:8000 in your browser, you will see the app rendered as before. The only difference will be that the server responds back with the rendered HTML this time around.

We have now learnt how to implement Server Side Rendering with a React app using a simple express server. The code used in this article is available at https://github.com/satansdeer/ssr-example, and a video version of this article can be found at https://www.youtube.com/watch?v=NwyQONeqRXA.
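Putting the server steps above together, a minimal sketch of server/server.js could look like the following. It relies on the @babel/register setup from the companion index.js to handle the JSX, and the paths and static-assets line are reasonable assumptions rather than the article's exact code.

```jsx
// server/server.js
import path from 'path';
import fs from 'fs';
import express from 'express';
import React from 'react';
import ReactDOMServer from 'react-dom/server';
import App from '../src/App';

const PORT = 8000;
const app = express();

// Render the app for requests to the root route.
app.use('^/$', (req, res) => {
  fs.readFile(path.resolve('./build/index.html'), 'utf8', (err, data) => {
    if (err) {
      console.error(err);
      return res.status(500).send('Error reading index.html');
    }
    // Inject the server-rendered markup into the placeholder div.
    return res.send(
      data.replace(
        '<div id="root"></div>',
        `<div id="root">${ReactDOMServer.renderToString(<App />)}</div>`
      )
    );
  });
});

// Serve the compiled static assets (JS, CSS) produced by the build step.
app.use(express.static(path.resolve(__dirname, '..', 'build')));

app.listen(PORT, () => {
  console.log(`SSR server listening on port ${PORT}`);
});
```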


Handling File Fields using React Hook Form

In this article, we will learn how to handle file uploads using react-hook-form.react-hook-form is a great library that provides an easy way to build forms with React. It provides a number of hooks that simplify the process of defining and validating various types of form fields. In the following sections, we will learn how to define a file input, and use it to upload files to firebase. We will use create-react-app to generate a stock React app. We'll call our app react-hook-form-file-input . Once the app is ready, we can navigate into the folder and install react-hook-form . Let us define our form in the app.js file. We can remove all the default markup added by create-react-app , and add a form element which contains a file input and a submit button like so: We can now connect our form element to react-hook-form using the useForm hook. useForm returns an object containing a register function which, as the name suggests, connects a given element with react-hook-form . We need to pass the register function as a ref into each element we want to connect. Now that our file input has been registered, we can handle submissions by adding an onSubmit handler to the form. The useForm hook also returns a handleSubmit function with which we will wrap our onSubmit handler. The data parameter that is passed to onSubmit is an object whose keys are the names of the fields, and values are the values of the respective fields. In our case, data will have just one key - picture , whose value will be a FileList . When we fire up the app using yarn start , we will see this on screen. On choosing a file and clicking 'Submit', details about the uploaded file will be logged to the console. Now that we have learnt how to work with file inputs using react-hook-form , let us look at a slightly more advanced use case for uploading files to Firebase . You can find the code for the Firebase file upload example here . Clone the repo and run yarn to install the dependencies. You can set up your Firebase configuration in the .firebaserc and firebase.json files at the root of the project, and log into Firebase using the CLI . As we've seen earlier, we can install react-hook-form via yarn using: The App.js file contains a file input which uploads a file to Firebase. We can now switch it over to use react-hook-form like so: We have wrapped the input element with a form , given it a name and registered it with react-hook-form . We have also moved all the logic in the onChange event handler into our onSubmit handler which receives the data from react-hook-form . We can run the app using the yarn start command and it will look the same as earlier. Choosing a file and clicking 'Submit' will upload it to Firebase. We have now learnt how to work with file inputs using react-hook-form . ✅ The code used in this article is available at https://github.com/satansdeer/react-hook-form-file-input and https://github.com/satansdeer/firebase-file-upload .
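A minimal sketch of the basic file-input form described above, assuming the older register-as-ref API (react-hook-form v6) that the article refers to; newer versions of the library use a different register call:

```jsx
// App.js - file input registered with react-hook-form (v6-style register-as-ref)
import React from "react";
import { useForm } from "react-hook-form";

export default function App() {
  const { register, handleSubmit } = useForm();

  // `data.picture` will be a FileList containing the chosen file(s)
  const onSubmit = (data) => {
    console.log(data.picture);
  };

  return (
    <form onSubmit={handleSubmit(onSubmit)}>
      <input type="file" name="picture" ref={register} />
      <button type="submit">Submit</button>
    </form>
  );
}
```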


Deep dive into the Node HTTP module

An in-depth look at the Node HTTP module, and how to use it to scale up! The HTTP module is a core module of Node.js, and it is fair to say it is one of the modules most responsible for Node's initial rise in popularity. HTTP is the veins of the internet: this website, and any site you explore, is requested from a server over the HTTP protocol, and the server uses the same HTTP protocol to send back the data you requested. Let's import the module and create a basic HTTP server with it. We have just created an HTTP Server object; some of its methods are: For a complete list of HTTP server class properties and methods, check out the official docs . In the example below, let's use the callback function to handle HTTP requests and respond to them. req : shorthand for "request", is an object from the class IncomingMessage that includes all the request information. Some interesting properties are: It's also good to remember that IncomingMessage extends the <stream.Readable> class, so each request object is, indeed, a stream. res : shorthand for "response", is an object from the class ServerResponse, which contains a collection of methods to send back an HTTP response. Some of the methods are: Each time we write response data with .write(), the chunk of data we passed gets immediately flushed to the kernel buffer. If we try to .write() after .end() has been called, we will get the following error: In an HTTP POST request, we usually get a body as part of the request. How do we access it using the Node HTTP module? Remember the request object is an instance of the IncomingMessage class, which extends the <stream.Readable> class, so in a POST request we can access the body as a stream of data like this: You could use an application like Postman to send the HTTP request to our application, and you would end up with something like this: While you are in Postman, make sure that you are firing a POST request. You will also need to set the following configuration so that the 'Content-Type' header is 'application/json'. To handle HTTPS requests, Node.js has the core module https , but first we need some SSL certificates. For the purpose of this example, let's generate some self-signed certificates. In your command line (use Git Bash if you are on Windows), let's run: Now let's use the HTTPS module to create an HTTPS server. The main difference is that we now have to read the key files and load them into an options object that we pass to .createServer() to create our new shiny HTTPS server. Sometimes we would like to make an HTTP request in order to gather data from a third-party HTTP server. We can achieve this by using the Node HTTP module's .request() function. In the following example, we will be calling the postman-echo API, which returns whatever we send it. But instead of using the core HTTP module for sending requests, I would suggest you use something more sophisticated and user-friendly, such as Axios . The advantages of something like Axios are the promise abstraction, easier error handling on requests, and support for really valuable plugins, like Axios retry. In a lot of situations, we can use a framework like Express to create a server instead of going directly to the HTTP module. Install express using npm in your command line. Express will give you a more elegant way to handle all your API routes and session data, and will provide plugins for authentication. Express is going to make your life way easier!
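To tie the earlier steps together, here is a minimal sketch of a server built with the core HTTP module that answers GET requests and reads a JSON body from POST requests. The port, routes, and response shapes are illustrative, and the client is assumed to send valid JSON:

```js
// A sketch of a basic server with the core HTTP module
const http = require("http");

const server = http.createServer((req, res) => {
  if (req.method === "POST") {
    // The request is a readable stream, so collect the body chunk by chunk
    let body = "";
    req.on("data", (chunk) => {
      body += chunk;
    });
    req.on("end", () => {
      res.writeHead(200, { "Content-Type": "application/json" });
      res.end(JSON.stringify({ received: JSON.parse(body) }));
    });
  } else {
    res.writeHead(200, { "Content-Type": "text/plain" });
    res.write("Hello from the Node HTTP module\n");
    res.end();
  }
});

server.listen(8080, () => {
  console.log("Server listening on http://localhost:8080");
});
```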
We've been through a lot within this short post, but with these few examples you should have a good grasp of the core functionality of the Node HTTP core module. It's ideal to understand the inner workings of the HTTP module, but for more complex tasks it is recommended to use an abstraction library, such as Express if you are building servers, or Axios if you are creating HTTP requests. Have fun and keep coding! Check out the documentation of the modules we used in this post: If you have any questions - or want feedback on your post - come join our community Discord . See you there!


How to Convert JSON to CSV in Node.js

In this article, you'll learn how to create a Node CLI tool to convert a JSON file into a valid CSV file.JSON has become the most popular way to pass data around on the modern web, with almost universal support between APIs and server applications. However, the dominant data format in spreadsheet applications like Excel and Google Sheets is CSV, as this format is the tersest and easiest for users to understand and create. A common function that backend apps will need to perform, therefore, is the conversion of JSON to CSV. In this tutorial, we'll create a CLI script to convert an input JSON file to an output CSV file. If you don't need to customize the conversion process, it's quicker to use a third-party package like json-2-csv. At the end of the tutorial, I'll also show you how to use this instead of a custom converter. To get the most out of this article you should be familiar with the basics of Node.js, JavaScript, and async/await. You'll need a version of Node.js that includes fs.promises and supports async/await and also have NPM installed. I recommend Node v12 LTS which includes NPM 6. Let's now get started making a Node script that we can use to convert a JSON file to CSV from the command line. First, we'll create the source code directory, json-to-csv . Change into that and run npm init -y so that we're ready to add some NPM packages. Let's now create am example JSON file that we can work with called input.json . I've created a simple data schema with three properties: name, email, and date of birth. It'd be very handy to allow our utility to take in a file name input and file name output so that we can use it from the CLI. Here's the command we should be able to use from within the json-to-csv directory: So let's now create an index.js file and install the yargs package to handle CLI input: Inside index.js , let's require the yargs package and assign the argv property to a variable. This variable will effectively hold any CLI inputs captured by yargs. Nameless CLI arguments will be in an array at the _ property of argv . Let's grab these and assign them to obviously-named variables inputFileName and outputFileName . We'll also console log the values now to check they're working how we expect: For file operations, we're going to use the promises API of the fs package of Node.js. This will make handling files a lot easier than using the standard callbacks pattern. Let's do a destructure assignment to grab the readFile and writeFile methods which are all we'll need in this project. Let's now write a function that will parse the JSON file. Since file reading is an asynchronous process, let's make it an async function and name it parseJSONFile . This method will take the file name as an argument. In the method body, add a try / catch block. In the try , we'll create a variable file and assign to this await readFile(fileName) which will load the raw file. Next, we'll parse the contents as JSON and return it. In the catch block, we should console log the error so the user knows what's gone wrong. We should also exit the script by calling process.exit(1) which indicates to the shell that the process failed. We'll now write a method to convert the JavaScript array returned from the parseJSONFile to a CSV-compatible format. First, we're going to extract the values of each object in the array, discarding the keys. To do this, we'll map a new array where each element is itself an array of the object values. 
Next, we'll use the array unshift method to insert a header row to the top of the data. We'll pass to this the object keys of any one of the objects (since we assume they all have the same keys for the sake of simplicity). The last step is to convert the JavaScript object to CSV-compatible string. It's as simple as using the join method and joining each object with a newline ( \n ). We're not quite finished - CSV fields should be surrounded by quotes to escape any commas from within the string. There's an easy way to do this: It's fairly trivial now to write our CSV file now that we have a CSV string - we just need to call writeFile from an async method writeCSV . Just like in the parse method we'll include a try / catch block and exit on error. To run our CSV converter we'll add an IIFE to the bottom of the file. Within this function, we'll call each of our methods in the sequence we wrote them, passing data from one to the next. At the end, let's console log a message so the user knows the conversion process worked. Let's now try and run the CLI command using our example JSON file: It works! Here's what the output looks like: There's a fatal flaw in our script: if any CSV fields contain commas they will be made into separate fields during the conversion. Note in the below example what happens to the second field of the last row which includes a comma: To fix this, we'll need to escape any commas before the data gets passed to the arrayToCSV method, then unescape them afterward. We're going to create two methods to do this: escapeCommas and unescapeCommas . In the former, we'll use map to create a new array where comma values are replaced by a variable token . This token can be anything you like, so long as it doesn't occur in the CSV data. For this reason, I recommend something random like ~~~~ or !!!! . In the unescapeCommas method, we'll replace the token with the commas and restore the original content. Here's how we'll modify our run function to incorporate these new methods: With that done, the convertor can now handle commas in the content. Here's the real test of our CLI tool...can we import a converted sheet into Google Sheets? Yes, it works perfectly! Note I even put a comma in one of the fields to ensure the escape mechanism works. While it's good to understand the underlying mechanism of CSV conversion in case we need to customize it, in most projects, we'd probably just use a package like json-2-csv . Not only will this save us having to create the conversion functionality ourselves, but it also has a lot of additional features we haven't included like the ability to use different schema structures and delimiters. Let's now update our project to use this package instead. First, go ahead and install it on the command line: Next, let's require it in our project and assign it to a variable using destructuring: We can now modify our run function to use this package instead of our custom arrayToCSV method. Note we no longer need to escape our content either as the package will do this for us as well. With this change, run the CLI command again and you should get almost the same results. The key difference is that this package will only wrap fields in double-quotes if they need escaping as this still produces a valid CSV. So now you've learned how to create a CLI script for converting JSON to CSV using Node.js. Here's the complete script for your reference or if you've skimmed the article:
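A sketch of what that complete script might look like, assembled from the steps described above; details such as the escape token and error messages are illustrative and may differ from the original repository:

```js
// index.js - a sketch of the JSON-to-CSV converter assembled from the steps above
const { argv } = require("yargs");
const { readFile, writeFile } = require("fs").promises;

// Nameless CLI arguments land in argv._, e.g. `node index.js input.json output.csv`
const [inputFileName, outputFileName] = argv._;

// Any string that never occurs in the data can act as the comma-escape token
const token = "~~~~";

async function parseJSONFile(fileName) {
  try {
    const file = await readFile(fileName);
    return JSON.parse(file);
  } catch (err) {
    console.error(err);
    process.exit(1); // signal failure to the shell
  }
}

const escapeCommas = (records) =>
  records.map((record) =>
    Object.fromEntries(
      Object.entries(record).map(([key, value]) => [
        key,
        String(value).replace(/,/g, token),
      ])
    )
  );

const unescapeCommas = (csv) => csv.split(token).join(",");

function arrayToCSV(records) {
  const rows = records.map((record) => Object.values(record));
  rows.unshift(Object.keys(records[0])); // header row from the first object's keys
  // Quote every field and join rows with newlines
  return rows.map((row) => `"${row.join('","')}"`).join("\n");
}

async function writeCSV(fileName, data) {
  try {
    await writeFile(fileName, data);
  } catch (err) {
    console.error(err);
    process.exit(1);
  }
}

(async () => {
  const records = await parseJSONFile(inputFileName);
  const csv = unescapeCommas(arrayToCSV(escapeCommas(records)));
  await writeCSV(outputFileName, csv);
  console.log(`Converted ${inputFileName} to ${outputFileName}`);
})();
```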


Formatting Dates in Node with Moment.js

Working with dates in a Node app can be tricky as there are so many ways to format and display them. The APIs available in native JavaScript are way too tedious to use directly, so your best option is to use a date/time library. The best known and most flexible option is Moment.js . For this article, I'll presume you understand the basics of JavaScript and Node.js. To install Moment you should have Node and NPM installed on your machine. Any current version will do, but if you're installing from scratch, I'd recommend using the Node v12 LTS which includes NPM 6. Other than for simple use cases and one-offs, the JavaScript Date API will be too low-level and will require you to write many lines of code for what seems like a simple operation. Moment is an incredibly flexible JavaScript library that wraps the Date API giving you very convenient helper methods allowing you to easily perform tasks like: Moment can be used in either the browser or on a Node.js server. Let's begin by going to the terminal and installing Moment: With that done, we can now require the Moment library in a Node.js project: The first thing we'll do to use Moment is to create a new instance by calling the moment method. So what is a Moment instance, and what exactly has been assigned to the variable m in the snippet above? Think of the Moment instance as a wrapper object around a single, specific date. The wrapper provides a host of API methods that will allow you to manipulate or display the date. For example, we can use the add method which, as you'd expect, allows you to add a time period to a date: Note that Moment provides a fluent API , similar to jQuery. It's called "fluent" because the code will often read like a sentence. For example, read this line aloud and it should be immediately obvious what it does: Another aspect of fluent API methods is that they return the same object allowing you to chain additional methods for succinct and easy to read code. We said above that Moment wraps a single, specific date. So how do we specify the date we want to work with? Just like with the native JavaScript Date object, if we don't pass anything to the moment method when we first call it, the date associated with the instance will be the current date i.e. "now". What if we want to create a Moment instance based on some fixed date in the past or future? Then we can pass the date as a parameter to moment . There are several ways to do this depending on your requirements. You may be aware of some of the standards for formatting date strings (with unfortunate names like "ISO 8601" and "RFC 2822"). For example, my birthday formatted as an ISO 8601 string would look like this: "1982-10-25T08:00:15+10:00"; . Since these standards are designed for accurately communicating dates, you'll probably find that your database and other software will provide dates in one of these formats. If your date is formatted as either ISO 8601 or RFC 2822, Moment is able to automatically parse it. If your date string is not formatted using one of these standards, you'll need to tell Moment the format you're using. To do this, you supply a second argument to the moment method - a string of format tokens . Most date string formats are specified using format token templates. It's easiest to explain these using an example. Say we created a date in the format "1982-10-25". The format token template representing this would be "YYYY-MM-DD". If we wanted the same date it in the format "10/25/82" the template would be "MM/DD/YY". 
Hopefully, that example makes it clear that the format tokens are used for a unique date property e.g. "YYYY" corresponds to "1982", while "YY" is just "82". Format tokens are quite flexible and even allow us to create non-numeric values in dates like "25th October, 1982" - the format token string for this one would be "Do MMMM, YYYY" (note that including punctuation and other non-token values in the template is perfectly okay). For a complete list of date format tokens, see this section of the Moment docs . So, if you want to use non-standard formatting for a date, supply the format token template as the second argument and Moment will use this to parse your date string. For example, shorthand dates in Britain are normally formatted as "DD-MM-YYYY", while in America they're normally formatted as "MM-DD-YYYY". This means a date like the 10th of October, 2000 will be ambiguous (10-10-2000) and so Moment cannot parse it accurately without knowing the format you're using. If we provide the format token template, however, Moment knows what you're looking for: A Unix timestamp is a way to track time as a running total of seconds starting from the "epoch" time: January 1st, 1970 at UTC. For example, the 10th of October, 2000 is 971139600 seconds after the epoch time, therefore that value is the Unix timestamp representing that date. You'll often see timestamps used by operating systems as they're very easy to store and operate on. If you pass a number to the moment method it will be parsed as a Unix timestamp: Now we've learned how to parse dates with Moment. How can we display a date in a format we like? The API method for displaying a date in Moment is format and it can be called on a moment instance: As you can see in the console output, Moment will, by default, format a date using the ISO 8601 format. If we want to use a different format, we can simply provide a formatted token string to the method. For example, say we wanted to display the current date in the British shorthand "DD-MM-YYYY": Here's another example showing a more reader-friendly date: While it's the most popular date library, Moment is certainly not the only option. You can always consider an alternative library if Moment doesn't quite fit your needs. One complaint developers have about Moment is that the library is quite large (20kB+). As an alternative, you might try Day.js which is a 2kB minimalist replacement for Moment.js, using an almost identical API. Another complaint is that the Moment object is mutable, which can lead to confusing code. One of the maintainers of Moment has released Luxon , a Moment-inspired library that provides an immutable and unambiguous API. So now you know how to format dates using Moment and Node. You understand how to work with the current date, as well as how to use standard and non-standard date strings. You also know how to display a date in whatever format you like by supplying a format token template. Here's a snippet summarizing the main use case: Parsing and displaying dates just scratches the surface of Moment's features. You can also use it to manipulate dates, calculate durations, convert between timezones, and more. If you'd like to learn more about these features I'd recommend you start with the Moment docs:
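As a summary sketch of the main use cases covered above - the dates are only examples, and note that moment(number) expects milliseconds while moment.unix(number) takes seconds since the epoch:

```js
const moment = require("moment");

// 1. Wrap "now" in a Moment instance and use the fluent, chainable API
const nextWeek = moment().add(1, "week");
console.log(nextWeek.format()); // ISO 8601 by default

// 2. Parse a non-standard date string by supplying a format token template
const britishDate = moment("10-10-2000", "DD-MM-YYYY");
console.log(britishDate.format("Do MMMM, YYYY")); // "10th October, 2000"

// 3. Parse a Unix timestamp (seconds since the epoch)
const fromTimestamp = moment.unix(971139600);
console.log(fromTimestamp.format("DD-MM-YYYY"));

// 4. Display the current date in a custom format
console.log(moment().format("DD-MM-YYYY"));
```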

Why to choose @rxweb/ngx-translate-extension for Internationalization in Angular

ngx-translate-extension is an extension library for ngx-translate and can be installed using npm. Most Angular developers have faced the need to integrate a translation library to internationalize an Angular application with a lot of translation features. But have you analyzed the maintainability, simplicity and readability of the code? How much consistency does the library provide while resolving the translation data? Confused about how to compare? Since there are a lot of translation libraries available, RxWeb has compared the most commonly used ones. When we talk about clean code, it includes verifying it at many steps, right from the configuration to rendering it in the user interface. RxWeb follows best design practices and a clean-code approach. Here is a glimpse of some of the basic features provided by its translation library. One of the most beautiful features 🙃 of the library is that it relies on Angular interpolation for displaying localized text in the respective view template with the double curly braces syntax. Have a look at the code 👇 Component: Html: Json: We use Angular interpolation for component-scoped property binding with translation text. This gives a fantabulous 😲 solution to the problem of writing little lines of business logic in the template, and this way the template is much cleaner. Component: Html: Json: One of the richest features of the library 😍 gives complete flexibility to run a translation property string the same as an existing component-scoped method. Component: Html: Json:


Bye bye, entryComponents?

In this blog post, we will look into what entryComponents are, what the purpose of entryComponents is, and why we no longer need to use them after Angular Ivy. With Angular 9, there has been a lot of talk going on around entryComponents, and Angular developers who had not been much aware of entryComponents are now interested in knowing more about them. In this blog post, I will try to cover everything that might help you clear up all the thoughts you have around the usage, importance, and the goodbye of entryComponents . The best way to start understanding what entryComponents are is to first understand how components are rendered in Angular and what role the compiler really plays here. Just for a visual understanding of what we are talking about, I have added below a snapshot of the component declarations inside the root module. Basically, there are two types of component declarations: ones which are included as a reference inside templates, and ones which are loaded imperatively. When we reference the component inside templates using the component selector, that’s the declarative way of writing components. Something like this: Now, the browser doesn’t really understand what app-instruction-card means, and therefore compiling it down to something the browser can understand is exactly the Angular compiler’s job. The imperatively written template for, for example, app-instruction-card would look something like this: This creates an element with your component name and registers it with the browser. It also checks for change detection by comparing the old value with the current value and updates the view accordingly. We write templates declaratively since the Angular compiler does this rendering bit for us. Now, this is where we can introduce ourselves to what entryComponents are! Before Ivy, Angular would create NgFactories for all the components declared in a template and as per the NgModule configuration; the unused components would then be tree shaken away. This is why dynamic components with no NgFactories could not be rendered and would throw an error like: Adding the component to the entryComponents array would then make the factories for these dynamic components available at runtime. Angular can specify a component as an entryComponent under the hood in different ways. Using ngDoBootstrap() and the same imperative code to declare a component bootstraps it and makes it an entry component in the browser. Now, you might be wondering: if entryComponents have such a massive role to play in component declaration, why do we as developers rarely see them used? As we discussed above, entryComponents are mostly specified in two ways - by bootstrapping them or by defining them in a router definition. But since these happen under the hood, we hardly notice it. However, when working with dynamic components or web components in Angular, we explicitly define the components as entry components inside the entryComponents array. Inside @NgModule , we can define the component inside this array, as shown in the sketch below. Alright, think for a minute. When we declare multiple components inside the declarations array of our modules, does that mean all these components will be included inside the final bundle? This is where entryComponents have a role to play. So, first of all, the answer to the above question is NO. All declared components aren’t necessarily present in the final produced bundle.
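A minimal pre-Ivy sketch of the declarations referenced above: a hypothetical ModalComponent registered in the entryComponents array and then created imperatively with ComponentFactoryResolver. Component, selector, and file names are assumptions for illustration:

```ts
// app.module.ts - registering the dynamic component (pre-Ivy, Angular 8-style)
import { NgModule } from '@angular/core';
import { BrowserModule } from '@angular/platform-browser';
import { AppComponent } from './app.component';
import { ModalComponent } from './modal/modal.component'; // hypothetical dynamic component

@NgModule({
  declarations: [AppComponent, ModalComponent],
  imports: [BrowserModule],
  // Without this entry the compiler keeps no factory for ModalComponent,
  // since it is never referenced in a template or a route definition
  entryComponents: [ModalComponent],
  bootstrap: [AppComponent],
})
export class AppModule {}

// host.component.ts - rendering it imperatively at runtime
import {
  Component,
  ComponentFactoryResolver,
  ViewChild,
  ViewContainerRef,
} from '@angular/core';

@Component({
  selector: 'app-host',
  template: '<ng-container #outlet></ng-container>',
})
export class HostComponent {
  @ViewChild('outlet', { read: ViewContainerRef, static: true })
  outlet!: ViewContainerRef;

  constructor(private resolver: ComponentFactoryResolver) {}

  showModal() {
    // Pre-Ivy, this throws "No component factory found for ModalComponent"
    // if the component is missing from the entryComponents array
    const factory = this.resolver.resolveComponentFactory(ModalComponent);
    this.outlet.clear();
    this.outlet.createComponent(factory);
  }
}
```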
Whether they are specified as entryComponents is what decides if they’ll be present in the produced bundle. This basically means that all the routable components will be present in the bundle for sure, and obviously the bootstrap component as well. It would also include the components that are declared inside the templates of other components. However, the tree shaking process will get rid of all the unused components with no reference, without having to include them inside the package. EntryComponents are mostly explicitly defined when dealing with dynamic components, like I said before. This is because there needs to be a reference for the compiler to understand that, though there is no reference for a particular component in a template or router for now, there is a possibility for it to be rendered dynamically when required. The ComponentFactoryResolver takes care of creating this dynamic component for us, but we specify it inside the entryComponents array inside NgModule. If you have worked with dynamic components before, you might have faced an error like: Now that we know why we need entryComponents, let's discuss a scenario in which we have created a dynamic component and have added it to the entryComponents array. This basically means that, since we explicitly declared it as an entryComponent, the tree shaker will not prune this component thinking that it doesn’t have a reference in the template. Also, specifying it as an entryComponent will create a component factory for this dynamic component. Ideally, the entryComponents entry for a particular dynamic component could be added automatically whenever a dynamic component was created to be used; this would save the developer from specifying it every time just to make sure the compiler knows about the component. One more issue with using entry components was referencing the entryComponents declared inside a lazily loaded module. If a lazy loaded module contains a modal component as an entry component, you’d face an error like "No component factory found for this component". This was because the root injector couldn’t be referenced to create a component factory for the entryComponent. One solution, though not very promising, was creating a component factory resolver yourself for a particular entry component inside the lazy loaded module to be able to execute it. With Angular 9 coming in and Ivy as the new rendering engine , all components are considered entry components and do not necessarily need to be specified inside the entryComponents array. With Ivy, components have locality, and this means that dynamically importing these dynamic components will always work regardless of the presence of entryComponents or ANALYZE_FOR_ENTRY_COMPONENTS. This is because now the presence of the @Component decorator means that the factories will be generated for the component; this happens thanks to the ngtsc compiler, which is essentially a set of TypeScript transformers, and these transformers introduce the static properties ɵcmp and ɵfac . These static properties are then able to easily access the code required for instantiating a component/module etc.
See the update in the official Angular documentation here: https://next.angular.io/guide/deprecations#entrycomponents-and-analyze_for_entry_components-no-longer-required A demo here shows how entryComponents are no longer required with Angular 9: https://ng-run.com/edit/c8U6CpMLbfGBDr86PUI0 In this blog post, we talked about the need for the entryComponents array when dealing with dynamic components or web components before Ivy . However, after Ivy , we do not need to generate NgFactories for components by specifying them in the entryComponents array; the presence of the @Component decorator provides the code required for the instantiation of the components, ensuring that the compiler is aware of the presence of these dynamic/web components.


A journey to Asynchronous Programming: NodeJS FS.Promises API

This post is about how Node.Js performs tasks, how and why one would use async methods and why and when one should use sync methods. We will also see how to make use of the new FS. Promises API.Throughout this post, we will look at the many ways to write code the asynchronous way in Javascript and also look at: ✅ How asynchronous code fits in the event loop ✅ When you should resort to synchronous methods ✅ How to promisify FS methods and the FS.Promises API To make the best use of this post, one must already be: ✅ Sufficiently experienced in Javascript and NodeJS fundamentals (Variables, Functions, etc.) ✅ Had some level of exposure to the FileSystem module (Optional) Once you're done reading this post you will feel confident about asynchronous programming and will have learned something new but also know: ⭐- If you hang tight there's also some bonus content regarding some best practices when using certain FileSystem methods! Javascript achieves concurrency through what is known as an event loop. This event loop is what is responsible for executing the code you write, processing any event that fires, etc. This event loop is what makes it possible for Javascript to run on a single thread and handle asynchronous tasks, this just means that Javascript does one thing at a time. This might sound like a limitation but it is definitely something that helps. It allows you to work without worrying about concurrency issues and surprisingly the event loop is non-blocking! Of course, unless you as a developer purposely do something to block it. The event loop looks more or less like this: This loop runs as long as your program runs and hence called the event loop. To better understand asynchronous programming though, one must understand the following concepts: Let's take a look at the following code example and see how a typical execution flow looks like: Initially the synchronous tasks console.log() will be run in the order they were pushed into the call stack. Then the Promise thenables will be pushed into the Job Queue , while the setTimeout 's callback function is pushed into the Callback Queue . However, as the Job Queue is given a higher priority than the Callback Queue, the thenables are executed before the callback functions. What's a promise or a thenable, you ask? That's what we will look at in the next topic! As you previously saw in the setTimeout , a callback function is one of the ways that Javascript allows you to write asynchronous code. In Javascript, even Functions are objects and because of this a function can take another function as an argument/parameter and can also be returned by functions. Functions that take another function as an argument is called a Higher-Order Function. A function that is passed as an argument to another function is what is known as a callback. But quite often, having a whole lot of callbacks look like this: Taken from callbackhell , this shows how extremely complex and difficult it might get to maintain callbacks in a large codebase. Don't panic! That's why we have promises. A promise is an object that will produce some value in the future. When? We can't say, it depends. However, the value that is produced is one of two things. It is either a resolved value or a reason why it couldn't be resolved, which usually indicates something is wrong. A promise goes through a lifecycle that can be visualized like the following: Taken from a great resource on promises, MDN . 
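To make the job queue versus callback queue discussion above concrete, here is a small sketch of the kind of execution-order example the post describes (the log messages are illustrative):

```js
console.log("1: synchronous");

setTimeout(() => {
  console.log("4: setTimeout callback (callback queue)");
}, 0);

Promise.resolve().then(() => {
  console.log("3: promise thenable (job queue)");
});

console.log("2: synchronous");

// Output order: 1, 2, 3, 4 - synchronous code runs first, then the job queue
// (thenables), and only then the callback queue (setTimeout callbacks)
```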
But still, this didn't provide the cleanliness we wanted, because it was quite easy to have a whole lot of thenables one after the other. This is why the async/await syntax was introduced, which looks like the following: Looks a whole lot better than what you saw in all the previous code examples! Before we jump into the exciting FS.promises API that I previously used, we must talk about the often unnoticed and unnecessarily avoided synchronous FileSystem methods. Remember how I mentioned previously that you can purposely block the event loop? A synchronous FS method does just that. Now, you might have heard quite a lot of times that you should avoid synchronous FS methods like the plague because they block the event loop, but trust me, there are times when you can use them. A synchronous function should be used over an asynchronous one when: A typical use case to satisfy both the above use cases can be expressed like this: DataStore is a means of storing products, and you'll easily notice the use of synchronous methods. The reason for this is that it is completely acceptable to use a synchronous method like this, as the constructor function is run only once per creation of a new instance of DataStore . Also, it is essential to see if the file is available and create the file before it will be used by any other function. The asynchronous FileSystem methods in NodeJS commonly use callbacks because, during the time they were made, Promises and async/await hadn't come out, nor were they at experimental stages. The key advantage these methods provide over their synchronous siblings is the fact that you do not end up blocking the event loop when you use them. This allows us to write better, more performant code. When code is run asynchronously, the CPU does not wait idly by until a task is completed but moves on to the next set of tasks. For example, let us take a task that takes 200ms to complete. If a synchronous method is used, the CPU will be occupied for the entire 200ms, but if you use an asynchronous one, around 190ms of that time is freed up and can be used by the CPU to perform any other tasks that are available. A typical code example of the asynchronous FileSystem methods looks like this: As you can see, they are distinguished by the lack of Sync and the apparent usage of callback functions. When secret.txt has been completely read, the callback function will be executed and the secret data stored will be printed to the console. As humans, we're prone to making silly mistakes, and when frustrated or under a lot of stress, we tend to make unwise decisions - one such decision is mixing synchronous code with asynchronous code! Let's look at the following situation: Due to the nature of how NodeJS tackles operations, it is very likely that the secret.txt file is deleted before we actually read it. Thankfully here, though, we are catching the error, so we will know that the file doesn't exist anymore. It is best not to mix asynchronous code with synchronous code; being consistent is mandatory in a modern codebase.
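For illustration, a minimal sketch contrasting the synchronous and callback-based asynchronous styles discussed above (the file names are examples):

```js
const fs = require("fs");

// Synchronous read - acceptable in one-off setup code (e.g. a constructor that
// runs once), but it blocks the event loop while the file is read
const config = fs.readFileSync("config.json", "utf8");

// Asynchronous, callback-based read - the event loop stays free while the
// file is being read, and the callback runs once the data is available
fs.readFile("secret.txt", "utf8", (err, data) => {
  if (err) {
    console.error("Could not read secret.txt:", err);
    return;
  }
  console.log(data);
});
```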
One method is to use the promisify method from the NodeJS util module: But as you can see, this allows you to only turn one method into its promisified version at a time, so some developers often used an external module known as bluebird that allowed one to do this: Some developers still use bluebird as opposed to the natively implemented Promises API, due to performance reasons. As of NodeJS version 10.0, you can now use FS.promises a solution to all the problems that you'd face with thenables when you use Promises. You can neatly and directly use the FS.promises API and the clean async/await syntax. You do not have to use any other external dependencies. To use the FS.promises API you would do something like the following: It's much cleaner than the code you saw from the callback hell example, and the promises example as well! One must note however that async / await is simply syntax sugar, meaning it uses the Promise API under the hood. File streams are unfortunately one of the most unused or barely known concepts in the FileSystem module. To understand how a FileStream works, you must look at the Streams API in the NodeJS docs. One very common use case of FileStreams is when you must copy a large file, quite often whether you use an asynchronous method or synchronous method, this leads to a large amount of memory usage and a long time. This can be avoided by using the FileSystem methods fs.createReadStream and fs.createWriteStream . Phew! That was long, wasn't it? But now you must feel pretty confident regarding asynchronous programming, and you can now use the FS.promises API instead of the often used callback methods in the FileSystem module. Over time, we will see more changes in NodeJS, it is after all written in a language that is widely popular. What you should do now is check out the resources section and read some more about this or try out Fullstack Node.Js to further improve your confidence and get a lot of other different tools under your belt!
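A short sketch pulling together promisification, the fs.promises API, and a stream-based copy, as discussed above (file names are examples):

```js
const fs = require("fs");
const util = require("util");

// Pre-Node 10 approach: promisify a single callback-based method
const readFileAsync = util.promisify(fs.readFile);

// Node 10+: use the built-in promise API with async/await
const fsPromises = fs.promises;

async function readSecret() {
  try {
    const data = await fsPromises.readFile("secret.txt", "utf8");
    console.log(data);
  } catch (err) {
    console.error(err);
  }
}

readSecret();

// Copying a large file with streams keeps memory usage low
fs.createReadStream("large-input.bin").pipe(fs.createWriteStream("large-copy.bin"));
```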



A Closer Look at ReactDOM.render - The Need to Know and More

📰 In this post, we will learn about render(), a core function of ReactDOM. Along with this, we will also take a closer look at the reconciliation process, how ReactDOM works under the hood, and its use-cases. The ReactDOM library is not often talked about when working with React; however, it is vital for any React developer to know how ReactDOM.render is used to inject our React code into the DOM. It is also good to get a brief idea of how it works under the hood, so we can write better code to accommodate the architecture. According to the React docs , This blog post presents an overview of the ReactDOM library. Since it involves some core ReactJS concepts, it is better to be familiar with the library and how it works. Even if you don't have a deep understanding of React, this blog post can help demystify a lot of concepts related to rendering on the DOM. Before moving on to ReactDOM, let's take a brief look at the Document Object Model. The Document Object Model (DOM) is a code representation of all the webpages that you see on the internet. Every element , such as a button or an image that you see on a web page, is part of a hierarchy of various elements within a tree structure. This means each element (except the root element) is a child of another element. This structure enables you to easily interface your JavaScript code with HTML to create highly powerful and dynamic web applications. The way that web developers usually work with the DOM to develop interactive websites is by finding a DOM node and making the required changes to it, such as changing an attribute or adding a child node. However, for highly dynamic web applications where the DOM needs to be updated frequently, applying these required changes can be a slow process, as the browser ultimately has to update the entire DOM every time. To combat this problem, React works with a Virtual DOM , which is just a representation of the actual DOM that is used behind the scenes to optimize the update process. When a React element is updated, ReactDOM first updates the Virtual DOM. After that, the difference between the actual DOM and the Virtual DOM is calculated, and only that difference is applied to the actual DOM. This means that the whole DOM does not need to be updated every single time. This process, also known as “reconciliation” , is one of the things that helps us to build blazing fast Single Page Applications (SPAs) with React. ReactDOM is a package that provides methods that can be used to interact with the DOM, which is needed to insert or update React elements. It provides many helper functions such as: And more... Most of the time when building Single Page Applications (such as with create-react-app ), we usually create a single DOM container and call the render method once to initialize our React application. Since this method is always used when working with React, learning about the workings of ReactDOM.render can greatly benefit us as React developers. The render method can be called the primary gateway between React and the DOM. Let’s say you have defined a React element ( <Calculator /> ), as well as a DOM node to act as a container for that element (a div with the ID of “container”). Now, you can use ReactDOM.render to render the element within that container using the syntax given below: ReactDOM.render(<Calculator />, document.querySelector("#container")) The statement above returns a reference to the component that you have created. If it is a functional component, it returns null.
Parameters: React elements are defined in a tree structure . This means each element is essentially a child of another React element. However, for the root element, we need to create an element (DOM node) in our HTML code to act as a container for our React element tree, which can be accessed via ReactDOM.render. For a better understanding, the figure below defines an example React component tree: In the above diagram, we can see that <Calculator /> is our root React element , which is rendered into the container div within the HTML code. Breaking it down further, the <Calculator /> element has two children, <Display /> and <KeyPad /> . Similarly, <KeyPad /> is broken down into <NumKeys /> , <FunctionalKeys /> and <Operators /> . This is what those components will look like in code: As defined above, the reconciliation process is when the React Virtual DOM (AKA the tree of React elements) is checked against the actual DOM, and only the necessary updates are reflected. In previous versions of React, a linear synchronous approach was used to update the DOM. It became obvious very quickly that this can greatly slow down our UI updates, negating the whole reason why reconciliation exists in the first place. In React 16, the team re-wrote major parts of the reconciliation algorithm to make it possible to update the DOM asynchronously. This is possible because of the Fiber architecture which lies at the core of the new implementation. Fiber works in two phases. The first phase is triggered when a component is updated through state or props. The standard React protocol is followed, where a component is updated and lifecycle hooks are called, after which the DOM nodes that need to be updated are calculated. In Fiber, these activities are termed “work” . Before React 16, having multiple work tasks could make our user interface look and feel sluggish because a recursive approach was used with the call stack. The problem with recursion is that the process only stops when the call stack is empty, which can result in higher time complexity. As more work is performed, more computational resources are utilized. The updated architecture utilizes the linked list data structure within its algorithm to handle updates. This, used in conjunction with the requestIdleCallback() function provided by newer browsers, makes it possible to perform asynchronous tasks (or work) more efficiently. Each “work” unit is represented by a Fiber Node . The algorithm starts from the top of the Fiber Node tree and skips nodes until it reaches the one which was updated. Then it performs the required work and moves up the tree until all the required work is performed. During the commit phase, the algorithm first calls the required lifecycle methods before updating or unmounting a component. Then, it performs the actual work, that is, inserting, updating or deleting a node in the DOM. Finally, it calls the post-mutation lifecycle methods componentDidMount and componentDidUpdate . Along with this, React also activates the Fiber Node tree that was generated in the previous phase. Doing this ensures synchronization between the tree of React elements and the DOM, as React knows what the current status of the Virtual DOM is. Whew... however abstract, we now have a faint idea of how the reconciliation process works behind the scenes. Let’s move back to the practical side of things, and discuss how we can use ReactDOM.render in different scenarios.
Assuming we are working with a Single Page Application, we will only need to instantiate the root element (usually App.jsx ) at a single location in the DOM. Therefore, oftentimes the lone index.html file is not even touched at all. All we need to do is create a container, and render our React root element to it, as described above in the Calculator example. We can also use ReactDOM.render to integrate React in a different application. This is why we call React a library, not a framework . You can use it as little as possible or as much as possible, as it completely depends on your use-case. We can create a wrapper function for our React module. All the required props can be passed to the function, and sent down to the component which is rendered through ReactDOM.render, just like it would be for a single page application. It is important to export the function, as we will need to call it within our JavaScript code. In the index.html file shown above, we are importing the script that renders our root React component, and then we call the function using plain JavaScript. We can also pass parameters, such as props for our React component, through this function. It is better to initialize React components after the document has loaded. You can also use ReactDOM.render multiple times throughout the application. This means that if you are creating a new website, or modifying an existing website that does not use React yet, you can use ReactDOM.render to generate some pages using React, while others do not use the library. When you unmount a React component from the DOM, it is important to call unmountComponentAtNode() according to the syntax given below to ensure there are no memory leaks. ReactDOM.unmountComponentAtNode( DOMContainer ) This is why the React team suggests using a wrapper for your React root elements. Doing this ensures that you can mount and unmount React nodes according to the design of your website. For example, moving from one React page to another that uses separate React root elements, it is possible to integrate the wrapper API with the page transition, which can automatically unmount the component for you. Similarly, you can also write the logic in your wrapper to unmount a React root component within the same page, as soon as its work is done. To update a component, you may call ReactDOM.render again for the same React element and DOM node. However, one thing that is important to note is that ReactDOM.render completely replaces all of your props each time this function is called. This means, you must also pass all of the other required props to the element that you are rendering, which can be an issue. The reason for the props being replaced is because React elements are immutable. The React team explains how you can create a wrapper to set the previous props again . Although render() is the most commonly used ReactDOM method, there are a few more available at your disposal. Let’s take a look at two of those. It is similar to render() , however, it is used when rendering pages through Server Side Rendering (SSR) . It integrates the necessary event handlers and functions to the markup that has been generated. Portals can be created to render a component outside of the React component tree of that specific component. This can be highly useful to generate elements somewhere unrelated on the page. To sum it up, ReactDOM acts as a powerful interface between our React component tree and the DOM. 
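A rough sketch of such a wrapper, using a hypothetical WidgetApp component and container ID; this illustrates the mount/unmount pattern described above (with the pre-React-18 ReactDOM.render and unmountComponentAtNode APIs), not the React team's exact wrapper:

```jsx
// widget.js - a small API the host page can call to mount and unmount a React root
import React from "react";
import ReactDOM from "react-dom";
import WidgetApp from "./WidgetApp";

export function mountWidget(containerId, props = {}) {
  const container = document.getElementById(containerId);
  ReactDOM.render(<WidgetApp {...props} />, container);
  // Return a teardown function to avoid memory leaks when leaving the page section
  return () => ReactDOM.unmountComponentAtNode(container);
}

// Usage from plain JavaScript after the document has loaded:
// const unmount = mountWidget("react-widget", { user: "Ada" });
// ...later, on navigation or teardown:
// unmount();
```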
To sum it up, ReactDOM acts as a powerful interface between our React component tree and the DOM. The most commonly used method from ReactDOM is render(), which can be used to connect entire React applications to the DOM. Once the React element and its child tree have been inserted into the DOM, the reconciliation process handles everything related to updates. Thanks to this process, whenever we update a part of our component tree, only the changes in that part are reflected in the actual DOM, saving us a lot of extra computation. React and ReactDOM provide powerful functions such as render() that make it easy to create fast and snappy single-page applications, which is part of what makes React such a popular front-end development library.


Atomic Design for Developers: Atomic Engineering

In my first article on Atomic Design for Developers I discussed a good way to organize components in your project. I highly suggest reading it first, although I'll briefly summarize it as well. If you don't know what atomic design is, please read Brad Frost's article on it. So, the big question left after Atomic Design for Developers: Project Structure is: how do we actually implement atomic design in practice? First we'll do a quick review for new readers on atomic design, but those who read the last article can skip down to the Building the Google Search Page section.

Atomic design is Brad Frost's methodology for building design systems. The idea is that we can take the basic building blocks of living things and give our UI a hierarchical structure based on them. Brad Frost defines five main stages of UI components:

Atoms are the simplest form of UI, consisting of things like headers, labels, input fields, buttons, etc.

Molecules are a combination of atoms that form more complex pieces of our UI, such as a search field with a submit button.

Organisms build on top of molecules and orchestrate larger parts of the UI. This can include a list of products, a header, forms, etc. Organisms can even include other organisms.

Templates are where our pages start to come together, giving context to all of our organisms and molecules by giving them a unified purpose. For example, a template for a contact page will have organisms for headers and forms, and molecules for text fields and navigation bars.

Pages, as the name implies, are our final pages with all their content. The difference between pages and templates is that templates don't provide any content.

As a way to demonstrate atomic design in practice, I created a painfully simple rip-off of the Google search page. You can type a query, click search, get redirected to a /results page, and see a list of hard-coded results. The article will focus on the high-level principles, and the sandbox project will give a good reference for how to use those principles in practice.

Let's pretend that our design team just passed us a couple of mocks: a search page and a results page. Just by looking at the mocks we can begin to think about how the page elements would be categorized into atoms, molecules, organisms, and templates. I find it helpful to start from large components, then see how we can break them down into smaller ones.

First, let's take a look at the app bar. The app bar is a fairly complex component: we have a profile image, menu items, and one of them is even a dropdown menu. Because our app bar is orchestrating smaller elements, it is a good candidate for an organism. What about our profile image, images link, and Gmail link? These definitely qualify as atoms; they can't really be broken down further. That just leaves the "me" dropdown menu. This is a fairly complex component compared to our links and profile image. Organisms can be composed of other organisms, so it's okay to categorize our dropdown menu as an organism as well.

Our search area is pretty simple. It's just the Google logo, an input field, and a button. These would all be atoms by themselves, but we should pay attention to how elements relate to each other, for example the relationship between the search input and the Google Search button. As a whole, we don't see the complex management of components that we would expect from an organism. Molecules give a little more context to the relationship between atoms, and tend to serve one specific function.
This is why we will categorize the search bar + Google Search button as a molecule.

The part that says "Results for: awdwadwa" is just a header. I'd classify this as an atom and move on. The more interesting part is the search results. Given that each search result consists of an icon, a header, and a description, we may classify a search result item as a molecule. What happens when we are managing a list of results? Well, now we are back in organism territory. We might have an organism called ResultList that deals with laying out our results.

That is how we categorized our page elements so far. What about our templates? Well, that should be pretty easy: they are just the skeleton of the mocks that we were given, in other words, the page without specific data.

Okay, so far we have looked at everything from a design standpoint. We broke down the mocks our design team gave us, but it's still unclear how we actually want to build out our components. Giving the atomic design stages a little more engineering context, I would generally suggest starting from the top of the hierarchy and working your way down (templates -> organisms -> molecules -> atoms) unless you are absolutely positive which category a component belongs to. The smaller your components get, the more defined their role becomes. For example, remember how we came up with a SearchField molecule in the mocks? Take a look at molecules/SearchField in the example project. While it is a valid molecule, did it really make sense to create one? We can only use it in one place in our app (strict styling and functionality), and the atoms that make up our SearchField can easily live in isolation. It's hard to avoid, but we can minimize the amount of refactoring and re-categorizing of components by starting from large to small.

Differentiating molecules from organisms can be tricky. Atomic design principles differentiate the two by saying organisms are relatively complex compositions of atoms and molecules, while molecules are simple compositions of atoms. How can we differentiate between the two categories in code? Here are a few tips to help you figure it out. The definition of molecules and organisms can get a little foggy in practice, but you want to make sure there is a clear line between pages and templates (and the rest of your components, for that matter). Storybook is a good tool to verify that you are on the right track, and I'll talk more about it later. If your template can't be presented in a story without hooking up providers, stores, etc., it's a sign that you are doing something fishy.

Pages make your web application come to life. Updating state based on API calls? Do that here. Pulling in images from your cloud storage service? Do that here. Using a HOC from Redux or React Router to get access to state or the history object? Wrap your page and pass it down. Don't wrap your templates or other components. It's kind of like how we isolate our APIs from our frontend; we are just taking that a step further and isolating our UI from our app state.

That may not be your goal, but you will want to think that way, and here is why. All that effort spent isolating your UI components from your application logic turns what is oftentimes a massive team effort into a day's worth of work. The second your design team turns around and tells you they want to build a component library + design system, you can suavely take your sunglasses off and say, "way ahead of you, pal".
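To make that page-versus-template split concrete, here is a loose sketch (the component and prop names are illustrative rather than exact):

```jsx
// pages/ResultsPage.jsx — the page owns the app concerns: routing and data.
import React from "react";
import { withRouter } from "react-router-dom";
import ResultsTemplate from "../components/templates/ResultsTemplate";

const ResultsPage = ({ history, location }) => {
  const query = new URLSearchParams(location.search).get("q") || "";

  // In a real app the results might come from an API call; hard-coded here.
  const results = [{ title: "newline", description: "Learn JavaScript" }];

  return (
    <ResultsTemplate
      query={query}
      results={results}
      // The template never touches the history object; it just calls onSearch.
      onSearch={(nextQuery) => history.push(`/results?q=${encodeURIComponent(nextQuery)}`)}
    />
  );
};

export default withRouter(ResultsPage);
```

Because the template only receives plain props like results and onSearch, it can be rendered in a Storybook story or a test without pulling in React Router or any store.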
Even if there is work to do on the components themselves, whatever exists won't break, and you can rest assured that breaking changes will be due to implementation details and not package dependencies. You can organize your Storybook the same way you organize your components. This is great for both you and your design team and makes collaborating easier. You shouldn't have to worry about components breaking due to dependency issues, because you built them to be as independent as possible. For Storybook users: how many times have you had a story fail to compile because you didn't wrap a component in some kind of provider? When you move Storybook into a new repo with your components, once again, everything will just work! If it doesn't work because your template component depends on React Router instead of being passed router props from the page, then Storybook will let you know very fast.

How many times have you had failing tests because your component requires some HOC or injected dependency from another library? It could be Redux Form, Redux itself, React Router, or anything at all. Atomic engineering is really just an enforcer of dependency injection. In the example project, when we type a query and click search, the app goes to the /results page using React Router's history object and then displays the results. If you look at the SearchField component in the example project, it doesn't care about the actual routing aspect. All it cares about is being given an action to perform on search. So when we test it, we can easily mock an onSearch function (a rough sketch of such a test follows at the end of this article). This isn't a new concept; atomic engineering simply helps you enforce it more easily.

90% of startups fail. Many are forced to pivot their product direction, but cannot stay alive long enough to do so because they had to completely scrap their original product and build a new one. Everything in the /components folder can be leveraged on your next project (except templates, because of their context-specific nature) with little to no setup needed, even if you aren't building a "component library" per se.

Please take a look at the example project and read through the comments. Even though the application is dead simple, it will give more context to the things I'm saying. If you have questions, personal experiences you wish to share, or anything else, please leave a comment with your thoughts! This isn't a bulletproof method, but I think it's better than the traditional container vs. component approach.
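A test for that might look roughly like this with Jest and React Testing Library (the roles and labels are assumptions about SearchField's markup, not taken from the example project):

```jsx
// SearchField.test.jsx — a sketch: onSearch is injected, so it is trivial to mock.
import React from "react";
import { render, fireEvent } from "@testing-library/react";
import SearchField from "./SearchField";

test("calls onSearch with the typed query", () => {
  const onSearch = jest.fn();
  const { getByRole } = render(<SearchField onSearch={onSearch} />);

  fireEvent.change(getByRole("textbox"), { target: { value: "atomic design" } });
  fireEvent.click(getByRole("button", { name: /search/i }));

  expect(onSearch).toHaveBeenCalledWith("atomic design");
});
```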


A journey through the implementation of the useState hook

When the React team released hooks last year, my first thoughts were, "what new piece of wizardry is this which has been unleashed upon us?!". Your initial impressions may have been less dramatic, but I am sure you shared my sentiments. Digging into the implementation of new framework features is one of my favoured methods of understanding them better, so in this post I will attempt to walk you through part of the workflow that is triggered when the useState hook is used. Unlike other articles which teach you how to use a hook or implement your own version, I am more concerned with the code behind the hook, so to speak.

One way I like to break down features is by focusing on their underlying data structures. Components, hooks, context... all these are React features, but when you peel back the layers, what are they really? I find this approach immensely useful because it helps me not to be fixated on framework-specific concepts but on the tool used to build the framework - JavaScript.

The first thing we will do is create a very simple example application (they work best for this kind of deep dive). Our app renders a button which displays a counter and increases it by one whenever it is clicked. It consists of a solitary function component. Hooks were created to encapsulate side effects and stateful behaviour in such components. Looking at this code through our data structure lens, there is not much to it: a single function component, the array returned by React.useState, and the setCount function called from the click handler. Before we go into what happens next, let us remind ourselves of the behaviour we expect to see based on how hooks work: the count value should persist between re-renders, and calling setCount should schedule a re-render with the new value. All code taken from React's source is from version 16.12.0.

React hooks are stored in a shared ReactCurrentDispatcher object which is first initialised when the app loads. It has one property called current, which has null as its initial value and is allocated hook functions corresponding to React's mount or update phase. This allocation happens in the renderWithHooks function. In the source, there are comments above this code which explain that if the check nextCurrentHook === null is true, this indicates to React that it is in the mount phase. You might be interested to know that the only difference between the dev and production versions of the HooksDispatcher objects is that the dev hooks contain sanity checks, such as ensuring that the second argument for hooks like useCallback or useEffect is an array.

At the beginning of our app's lifecycle, our dispatcher object is HooksDispatcherOnMountInDEV. Each time a hook is called, our dispatcher is resolved by a function which checks that we are not trying to call a hook outside of a function component. useState itself actually calls a function called mountState to execute the core of its work. Inside mountState, a call to mountWorkInProgressHook returns an object which starts off with null as the value for all its properties but is populated with our initial state by the end of mountState. mountState's return [hook.memoizedState, dispatch] maps to our state initialisation expression const [count, setCount] = React.useState(0). With regards to the hooks part of React's initialisation, this is where our interest ends.

At the beginning of this article we said we would be looking at things from a data structure perspective. React exposes hooks to us as functions but under the hood, they are modelled as objects. Why this is the case will become apparent in the next section. What happens when we click the button and update our counter?
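Keeping the example component in front of us helps here; a minimal sketch of it might look roughly like this:

```jsx
import React from "react";
import ReactDOM from "react-dom";

// A solitary function component holding a counter in useState.
function ComponentWithHook() {
  const [count, setCount] = React.useState(0);
  return (
    <button onClick={() => setCount(count + 1)}>Clicked {count} times</button>
  );
}

ReactDOM.render(<ComponentWithHook />, document.getElementById("root"));
```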
From what we know so far, we should expect the answer to that question to also include answers to these sub-questions:

1. Which state field on the hook object gets updated?
2. What arguments does the dispatch function behind setCount receive?
3. What is the fiber that dispatchAction is bound to?

The answer to question two comes from the declaration of dispatchAction. Its expected arguments are dispatchAction(fiber, queue, action). In the mountState function we can see that fiber and queue are already passed in, so action refers to whatever argument we pass to setCount. This makes sense, since dispatchAction is a bound function.

The fiber argument in dispatchAction provides the answer to question three. I have written about React's new fiber implementation before, but essentially, a fiber is an object that is mutable, holds component state and represents the DOM. React creates a tree of these objects and that is how it models the entire DOM. Every component has a corresponding fiber object. You can actually view the fiber node associated with any HTML element by grabbing a reference to the DOM element and then looking for a property that begins with __reactInternalInstance.

The fiber object we get when dispatchAction runs is the one for our ComponentWithHook component. It has a property called memoizedState and its value is the hook object created during mountState's execution. That object has a property called next with the value null. If ComponentWithHook had called useState more than once, memoizedState on its fiber object would be a chain: the first hook object's next property would point to the hook object for the second useState call, and so on. Hooks are stored according to their calling order in a linked list on the fiber object. This order is why one of the rules of using hooks is that they should not be used in loops, conditions or nested functions.

The docs illustrate what could go wrong with a form component whose first useState call is wrapped in a condition. When React first initialises that app, the fiber node for the component has a property called _debugHookTypes with the following array: ["useState", "useEffect", "useState", "useEffect"]. When setName is invoked, perhaps to clear the name field in the form, the updateHookTypesDev function runs and compares the currently executing hook with its expected index in the array. In the example, it finds useState where it expected useEffect and throws an error. And if you tried tricking React by rearranging the hooks so that the count still matched, you would be caught in the renderWithHooks function thanks to the linked list implementation: when the error occurs, React expects to be working on useState('Poppins') and for the next property on its hook object to be null. Instead, it encounters the useState('Nyasha') hook and finds its next property pointing to the hook object for useState('Poppins').

Moving on to our first question (which state field on the hook object gets updated?), we can answer it by looking at the hook object we get once the component has been updated and re-rendered. A lot has changed, the most significant changes being to queue.last and baseUpdate. Their changes are identical because baseUpdate contains the most recent action that changed baseState. If you were to increment the counter again and pause at updateFunctionComponent, for example, the action property on queue.last would be 2 but would remain 1 on baseUpdate.

As for queue.last: the dispatch function behind setCount accepts either a plain value or a function which will take the current state as its first argument, so we could re-write our setCount call to use that functional form. Some developers advocate giving a function to useState's update function if your state update depends on your previous state.
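For example, sticking with our counter (a rough sketch):

```jsx
import React from "react";

function ComponentWithHook() {
  const [count, setCount] = React.useState(0);

  // Passing a value: React stores count + 1 as the action on the update queue.
  const incrementWithValue = () => setCount(count + 1);

  // Passing an updater function: React calls it with the current state,
  // which is safer when the new state depends on the previous one.
  const incrementWithFunction = () => setCount(previousCount => previousCount + 1);

  return <button onClick={incrementWithFunction}>Clicked {count} times</button>;
}
```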
Another thing to note is that if the newly computed state is the same as the current state, React bails out without scheduling a re-render. I came across many references to reducers whilst looking at useState, and this is because the function which actually does the state update is called updateReducer; it is also used to perform updates by the useReducer hook. As I said earlier, if last.eagerState has been computed, it means React has not yet entered the render phase. When it eventually does so, it realises that the new state has already been "eagerly computed", so it is applied to the lastRenderedState, memoizedState and baseState properties.

We began this article by asking what happens when we introduce the useState hook to a codebase. It involves the creation of a shared object and a linked list. This post is by no means an exhaustive or complete explanation of useState, but I hope it has provided some interesting insight into React's internals. As I mentioned at the outset, focusing on the data structures behind library and framework features can yield some interesting learnings. From a developer's point of view, hooks are functions which encapsulate stateful code and side effects, but internally React is working on a linked list. I have found this approach not only educational but empowering, because it reminds me that despite the complexity of the tools we use, they are built using some of the JavaScript language features you and I use daily.