A Complete Server: Deployment

It doesn't matter how impressive our server is if nobody else can use it; we need to ship our work. In this chapter we'll cover different ways to get our API live and the things we'll want to keep in mind for production apps.

Running our app on a remote server can be tricky. Remote servers are often a very different environment from our development laptops. We'll use some configuration management techniques to make it easy for our app to run both locally on our own machine and in the cloud (i.e. someone else's datacenter).
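
For example, a common pattern (though not the only one) is to centralize environment-specific settings in a small config module that reads from environment variables and falls back to development defaults. A minimal sketch, where the file, variable, and database names are just placeholders:

```js
// config.js -- reads settings from the environment, with local defaults.
// PORT, MONGO_URI, and NODE_ENV are read if set; the fallbacks are for development.
module.exports = {
  port: process.env.PORT || 1337,
  mongoUri: process.env.MONGO_URI || 'mongodb://localhost:27017/dev',
  env: process.env.NODE_ENV || 'development'
}
```

In production we'd set these variables on the VPS (or in a hosting platform's dashboard); locally, the defaults keep npm start working without any extra setup.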

Once our app is deployed, we'll need to be able to monitor it and make sure that it's behaving the way we expect. For this we want things like health checks, logging, and metrics visualization.

For fun, we can also test the performance of our API with tools designed to stress-test it, such as ab (Apache Bench) or siege.

What You Will Learn#

Deployment is a huge topic that warrants a book of its own. In this chapter we are going to discuss:

  • Using a VPS (Virtual Private Server) and the various deployment considerations there

  • Using a PaaS (Platform as a Service), with a step-by-step example of deploying our Node app to Heroku, including a MongoDB database.

  • Deploying to serverless hosts like AWS Lambda and the considerations there

  • Features you often need to support in a production app, such as secrets management, logging, and health checks, along with suggestions for tooling.

  • And lastly, security considerations, both within your server and in its interactions with JavaScript web apps.

By the end of this chapter, you'll have a strong orientation to the various pieces required to deploy a production app. Let's dive in.

Deployment Options#

Today, there's no shortage of options for us to deploy our API. Ultimately, there's a tradeoff between how much of the underlying platform we get to control (and therefore have to manage) and how much of that we want handled automatically.

Using a VPS (Virtual Private Server)#

On one end of the spectrum, we'd set up our own VPS (virtual private server) on a platform like DigitalOcean (or Chunkhost, Amazon EC2, Google GCE, Vultr, etc...). Running our app on a server like this is, in many ways, the closest to running it on our own computer. The main difference is that we'd need to use a tool like SSH to log in and then install everything necessary to get our app ready.

This approach requires us to be familiar with a decent amount of system administration, but in return, we gain a lot of control over the operating system and environment our app runs in.

In theory, it's simple to get our app running: sign up for a VPS, choose an operating system, install Node.js, upload our app's code, run npm install and npm start, and we're done. In practice, there's a lot more to consider. This approach enters the realm of system administration and DevOps -- entire disciplines in their own right.

Here, I'm going to share many of the high-level considerations to keep in mind if you decide to run your app on a VPS. Because there is so much to cover, I'll give guidelines and links for further reading rather than a detailed code tutorial on each topic.

Security & System Administration#

Unlike our personal computers, a VPS is publicly accessible. We would be responsible for the security of our instance. There's a lot that we'd have to consider: security updates, user logins, permissions, and firewall rules to name a few.

We also need to ensure that our app starts with the system and stays running. systemd is the standard way to handle this on Linux systems, though some people prefer tools like [pm2](https://pm2.io/doc/en/runtime/overview/) instead.
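
As an illustration, pm2 is typically driven by a JavaScript "ecosystem" file; a minimal sketch, where the app name and entry point are placeholders:

```js
// ecosystem.config.js -- a minimal pm2 process file.
// Start with `pm2 start ecosystem.config.js`.
module.exports = {
  apps: [
    {
      name: 'api',            // placeholder app name
      script: './server.js',  // placeholder entry point
      instances: 'max',       // one process per CPU core
      exec_mode: 'cluster',   // pm2 uses Node's cluster module under the hood
      env: { NODE_ENV: 'production' }
    }
  ]
}
```

Running pm2 startup then generates a boot script (a systemd unit on most modern Linux distributions), so even this route typically leans on systemd under the hood.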

We'd also need to set up MongoDB on our server instance; so far, we've only been running MongoDB locally on our development machine.
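
Wherever the production database ends up (on the same instance or a managed service), the connection string should come from configuration rather than being hard-coded. A rough sketch using the official mongodb driver (v4+ style), with placeholder variable and database names:

```js
const { MongoClient } = require('mongodb')

// In development this falls back to the local MongoDB; in production we set
// MONGO_URI to point at the database running on (or near) our VPS.
const uri = process.env.MONGO_URI || 'mongodb://localhost:27017/dev'

async function connectToDb () {
  const client = new MongoClient(uri)
  await client.connect()
  return client.db() // uses the database named in the connection string
}
```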

HTTPS#

The next thing we'd have to take care of is HTTPS. Because our app relies on authentication, it needs to use HTTPS when running in production so that the traffic is encrypted. If we don't encrypt the traffic, our authentication tokens will be sent in plain text over the public internet, where any user's token could be copied and used by a bad actor.

To use HTTPS for our server we'd need to make sure that our app can provision certificates and run on ports 80 and 443. Provisioning certificates isn't as painful as it used to be thanks to Let's Encrypt and modules like [greenlock-express](https://www.npmjs.com/package/greenlock-express). However, to use ports 80 and 443 our app would need to be run with elevated privileges which comes with additional security and system administration considerations.
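
As a rough sketch of what that looks like with greenlock-express (the maintainer email and config directory are placeholders, and the exact API can differ between versions):

```js
'use strict'

const app = require('./app') // our existing Express/Connect app (placeholder path)

// greenlock-express provisions and renews Let's Encrypt certificates,
// then listens on ports 80 and 443 on our behalf.
require('greenlock-express')
  .init({
    packageRoot: __dirname,
    configDir: './greenlock.d',         // where certificates and config are stored
    maintainerEmail: 'ops@example.com', // placeholder contact for Let's Encrypt
    cluster: false
  })
  .serve(app)
```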

Alternatively, we could choose to only handle unencrypted traffic in our Node.js app and use a reverse proxy (Nginx or HAProxy) for TLS termination.

Scaling#

There's another issue with using HTTPS directly in our app. Even if we change our app to run an HTTPS server and run it on privileged ports, by default we could only run a single node process on the instance. Node.js is single-threaded; each process can only utilize a single CPU core. If we wanted to scale our app beyond a single CPU we'd need to change our app to use the [cluster](https://nodejs.org/api/cluster.html#cluster_cluster) module. This would enable us to have a single process bind to ports 80 and 443 and still have multiple processes to handle incoming requests.
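
A minimal sketch of that pattern, assuming our HTTP server lives in a separate server.js (a placeholder path):

```js
const cluster = require('cluster')
const os = require('os')

if (cluster.isMaster) {
  // The master forks one worker per CPU core. All workers share the same
  // listening ports; Node distributes incoming connections between them.
  os.cpus().forEach(() => cluster.fork())

  cluster.on('exit', worker => {
    console.log(`worker ${worker.process.pid} exited, starting a replacement`)
    cluster.fork()
  })
} else {
  // Each worker runs a full copy of the app and calls server.listen() itself.
  require('./server')
}
```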

If we rely on a reverse proxy, we don't have this issue. Only the proxy will listen to ports 80 and 443, and we are free to run as many copies of our Node.js app on other ports as we'd like. This allows us to scale vertically by running a process for each CPU on our VPS. To handle more traffic, we simply increase the number of cores and amount of memory of our VPS.

We could also scale horizontally, by running multiple instances in parallel. We'd either use DNS to distribute traffic between them (e.g. round-robin DNS), or we would use an externally managed load balancer (e.g. DigitalOcean Load Balancer, Google Cloud Load Balancing, AWS Elastic Load Balancing, etc...).

Scaling horizontally has an additional benefit. With only a single instance, we'll have downtime whenever we need to perform server maintenance that requires a system restart. With two or more instances, we can route traffic away from any instance while it is undergoing maintenance.

Multiple Apps#

If we wanted to run a different app on the same VPS we'd run into an issue with ports. All traffic going to our instance would be handled by the app listening to ports 80 and 443. If we were using Node.js to manage HTTPS, and we created a second app, it wouldn't be able to also listen to those ports. We would need to change our approach.

To handle situations like this we'd need a reverse proxy to sit in front of our apps. The proxy would listen to ports 80 and 443, handle HTTPS certificates, and would forward traffic (unencrypted) to the corresponding app. As mentioned above, we'd likely use Nginx or HAProxy.

Monitoring#

Now that our apps are running in production, we'll want to be able to monitor them. This means that we'll need to be able to access log files, and watch resource consumption (CPU, RAM, network IO, etc...).

The simplest way to do this would be to use SSH to log into an instance and use Unix tools like tail and grep to watch log files and htop or iotop to monitor processes.
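
Beyond shell tools, it's common to expose a small health-check endpoint that a load balancer or uptime monitor can poll. A minimal sketch, assuming an Express app and a route path of our choosing:

```js
const express = require('express')
const app = express()

// A hypothetical health-check route: returns 200 with basic process stats.
app.get('/health', (req, res) => {
  res.json({
    status: 'ok',
    uptime: process.uptime(),             // seconds since the process started
    memoryRss: process.memoryUsage().rss  // resident memory in bytes
  })
})

app.listen(process.env.PORT || 1337)
```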

If we were interested in better searching or analysis of our log files, we could set up Elasticsearch to store and index our log events, Kibana for searching and visualization, and Filebeat to ship the logs from our VPS instances to Elasticsearch.
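
That pipeline works best when our app writes structured JSON logs rather than free-form text. As one option (not the only one), the pino logger emits newline-delimited JSON that Filebeat can ship unchanged:

```js
const pino = require('pino')

const logger = pino({ level: process.env.LOG_LEVEL || 'info' })

// Each call writes one JSON line (timestamp, level, our fields, message),
// which is easy for Filebeat to ship and for Kibana to filter on.
logger.info({ route: '/products', durationMs: 42 }, 'request completed')
logger.warn({ route: '/orders', status: 503 }, 'upstream unavailable')
```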

Deploying Updates#

After our app is deployed and running in production, that's not the end of the story. We'll want to be able to add features and fix issues.

After we push a new feature or fix, we could SSH into the instance, run git pull && npm install, and restart our app. This gets more complicated as we increase the number of instances, processes, and apps that we're running.

In the event of a faulty update where a code change breaks our app, it's helpful to quickly roll back to a previous version. If our app's code is tracked in git, this can be handled by pushing a "revert" commit and treating it like a new update.

 
