Tutorials on Serverless

Learn about Serverless from fellow newline community members!

  • React
  • Angular
  • Vue
  • Svelte
  • NextJS
  • Redux
  • Apollo
  • Storybook
  • D3
  • Testing Library
  • JavaScript
  • TypeScript
  • Node.js
  • Deno
  • Rust
  • Python
  • GraphQL

Building an API using Firebase Functions for cheap

When I'm working on personal projects, I often need to set up an API that serves data to my app or web pages. I get frustrated when I end up spending too much time on hosting and environment issues. These days, I host the API using Cloud Functions for Firebase. It hits all my requirements.

The official name is Cloud Functions for Firebase, but in this article I'm going to call it Firebase Functions. This is mostly to distinguish it from Google's other serverless functions-as-a-service, Cloud Functions. You can read more about the differences here. While I'm not going to write a mobile app in this article, Firebase Functions still fits this use case well. If all this isn't confusing enough, Google is rolling out a new version of Cloud Functions, called 2nd generation, which is in public preview.

So, to move forward, let's identify our working assumptions and get the initial setup out of the way. After that's complete, you should have a single file called firebase.json and a directory called functions. The functions directory is where we'll write our API code.

Next, take the emulator out for a spin. Congrats, you have Firebase Functions working on your local system! To exit the emulator, just type Ctrl-C in your terminal window.

This is all very exciting. Let's push our new "hello world" function into the cloud from the command line. The output won't match exactly, but once the deploy finishes, if we navigate to the Function URL we should get the 'Hello from Firebase!' message. Exciting!

Do you see how easy it is to create Firebase Functions? We've done the hard part: setting up our local environment and the Firebase project. Let's jump into creating an API using Express.

First, install express with npm. Next, edit the index.js file to define the Express app and its routes. Then, if you run the emulator again, you can load your API locally. Note that the URL in the emulator output is a little different: it should have 'api' added at the end. You should see our 'Hello World' message.
Now for more fun, add '/testJSON' to the end of your link. You should see the browser return the JSON data that our API has sent. Finally, let's deploy to the cloud. Note that when you try to deploy, Firebase is smart enough to detect that major changes to the URL structure have occurred. You'll need to verify that you did indeed make these changes and that everything is OK. Since this is a trivial function, you can type Yes. Firebase will delete the old function we deployed earlier and create a new one. Once that completes, try to load the link and validate that your API is now working!

This article has walked you through the basics of using Firebase Functions to host your own API. Writing a full-featured API is beyond the scope of this article. There are many resources out there to help with that task, but I hope you'll think about Firebase Functions next time you start a project.


Connecting Serverless Django to an Amazon RDS Instance (Part 2)

Disclaimer: please read the first part of this blog post here before proceeding. It walks through the initial steps of creating a VPC with two subnets, one private and one public, to restrict access to the RDS database to the deployed Lambda function within the same network.

AWS Relational Database Service (RDS) is an AWS cloud computing solution that enables developers to quickly provision, deploy and scale popular relational databases, such as PostgreSQL, on reliable infrastructure.

To create an RDS instance, visit the Amazon RDS console and click on the "Create database" button. Select "Standard create" as the database creation method. This gives us control over the configuration options, such as whether backups should be performed. Then, select "PostgreSQL" as the engine type. You can choose a specific version from the adjacent dropdown, but for this tutorial, let's stick with the pre-selected version 12.5-R1.

Under templates, choose "Free tier" to avoid being billed for RDS-related costs while following this tutorial. For larger, commercial projects, you would create, at minimum, two separate RDS instances: one for a production environment and one for a development environment.

Label the instance with a unique, valid identifier. For this tutorial, let's name it serverless-django to indicate its connection to the serverless Django application. Then, set superuser credentials for the instance. Remember to use a strong password! Once you set these credentials, add them to the .env.production file (PG_DB_USER for the master username and PG_DB_PASSWORD for the master password).

Leave the database instance class as db.t2.micro, which corresponds to the free tier. Each class tier comes with different memory and CPU specifications, with db.t2.micro offering the lowest. Leave the storage settings as is, but uncheck storage autoscaling, since our database will never exceed its maximum storage capacity.
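For reference, a minimal .env.production could hold the credentials like this. Only the variable names PG_DB_USER and PG_DB_PASSWORD come from the article; the values shown are placeholders:

```shell
# .env.production -- sketch; the values are placeholders.
# Never commit real credentials to version control.
PG_DB_USER=postgres
PG_DB_PASSWORD=replace-with-a-strong-password
```

A third variable, PG_DB, is recorded later in the walkthrough when the database instance is named.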
The "Availability & durability" section is automatically disabled because of our earlier selection of the free tier. Multi-AZ deployment should be enabled in the production environment of a large, commercial project to ensure the database is durable and highly available.

Under the "Connectivity" section, select the VPC created in the first part of this tutorial. For the subnet group, select the option "Create new DB Subnet Group" and disallow public access. We do not want to assign a public IP address to the database; otherwise, anyone on the public Internet could access the database and its contents by correctly guessing your superuser credentials or exploiting a known vulnerability of the database. The database should only accept connections from our Lambda function within the same VPC. For the VPC security group, create a new VPC security group for this database and name it serverless-django-db-sg. Keep the availability zone set to "No preference."

Note: You can instead choose an existing VPC security group, such as the VPC's default security group named "default," which is listed under the "Security Groups" table in the AWS VPC console.

Keep the database authentication method as password authentication. Give the database instance a name of ServerlessDjangoDB. Record this same name within the .env.production file for the environment variable PG_DB. Keep the database parameter group as default.postgres12. Uncheck automatic backups and performance insights; these features are useful for the production environment of a large, commercial project. Keep the remaining settings the same.

Click the "Create database" button to create the database instance. However, an error message appears when you try to do so: our VPC requires at least two private subnets, covering at least two availability zones, to satisfy the database's availability zone coverage requirement. Availability zones provide isolation within a region.
When an availability zone experiences network issues or downtime, other availability zones are highly unlikely to experience the same issues. Subnets covering those availability zones remain available while issues are being resolved in the affected zones. Therefore, we need to revisit the AWS VPC console and add another private subnet that covers a different zone than the current private subnet.

Open the AWS VPC console in a new browser tab/window. Select the VPC's private subnet from the table of subnets. Within its details, take note of the subnet's availability zone ID (in the below screenshot, it is use1-az3). When we create another private subnet, it cannot have the same availability zone ID.

Let's create a new private subnet. Select the VPC in the dropdown. Under the "Subnet settings" section, you can assign any name to the subnet, but for this tutorial, it will simply be named "Private subnet." When you choose an availability zone for this subnet, pick one that is different from the one already covered. In the below screenshot, since use1-az3 is already covered by our other private subnet, let's pick US East (N. Virginia) / us-east-1f. Set the private subnet's IPv4 CIDR block to 10.0.16.0/21. Then, click the "Create subnet" button to create the private subnet. Once the private subnet is successfully created, close out that browser tab/window.

Toggle back to the previous tab/window with the database creation wizard and click the "Create database" button. The error message disappears, and you should now be able to successfully create the database instance.

Currently, the database's security group still allows traffic from outside sources. Let's limit this access to only resources within the same VPC. Select the database (identifier serverless-django) within the table of databases in the Amazon RDS console. Scroll down to the "Security group rules" section.
To restrict the database's inbound traffic, click on the first security group with the type "CIDR/IP - Inbound" to modify the inbound traffic rules. Under the "Inbound rules" section, click on the "Edit inbound rules" button. Delete the default inbound rule and add a new inbound rule. Set this rule's type to "PostgreSQL" and its source to "Custom." Then, whether you created a new VPC security group for the database or chose an existing one, select that security group from the field with the magnifying glass. Click the "Save rules" button.

You can now find this new security group rule inside the security group rules table with the type "EC2 Security Group - Inbound."

Continue on to the last part of this tutorial here, which dives into deploying the serverless Django application onto AWS Lambda with Zappa.
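To make the wiring concrete, here is a sketch of how a Django settings module might consume the environment variables recorded during this walkthrough. PG_DB, PG_DB_USER and PG_DB_PASSWORD are the names used in the article; PG_DB_HOST is a hypothetical variable for the RDS endpoint hostname, which you can copy from the instance's detail page:

```python
# settings.py (sketch) -- reads the credentials stored in .env.production.
import os

DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        # Database name recorded as PG_DB in the walkthrough.
        "NAME": os.environ.get("PG_DB", "ServerlessDjangoDB"),
        "USER": os.environ.get("PG_DB_USER", ""),
        "PASSWORD": os.environ.get("PG_DB_PASSWORD", ""),
        # Hypothetical variable: the RDS instance's endpoint hostname.
        "HOST": os.environ.get("PG_DB_HOST", ""),
        # Default PostgreSQL port.
        "PORT": "5432",
    }
}
```

Because the Lambda function and the database share the VPC, the HOST value resolves privately; no public IP is involved.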



Connecting Serverless Django to an Amazon RDS Instance (Part 1)

Serverless computing changes how cloud infrastructure is built: developers provision services and define resource demands with code rather than manually configuring hardware from scratch. Designing an application around a serverless architecture and workflow not only saves development time, but also drives down scaling and operational costs. Cloud computing platforms, such as Amazon Web Services (AWS), offer and maintain a plethora of serverless computing services to automate basic tasks and allow developers to deploy new services quickly.

With AWS Lambda, instead of routing all incoming requests to a single server instance that hosts multiple API endpoints, we can distribute these endpoints across many functions. Since each function runs in its own container, different functions can be allocated different amounts of memory and CPU power depending on each function's needs. Because containers are spun up only in response to events and are not kept running during inactivity (unlike a traditional server that is online 24/7), the savings compound while end users are still served. Regardless of traffic volume, AWS Lambda automatically creates instances of your function to meet demand, and only charges you for the function's run time and the total number of times it is invoked (priced per 1 million requests). The fact that AWS manages this entire infrastructure layer (e.g., system patches, security updates, etc.) lets developers focus more on their own applications.

AWS Lambda provides runtimes for various languages, such as Python and Java, and there are many open-source libraries and utilities available for developing, deploying and securing serverless applications. For Python applications, Zappa makes deployment to a serverless environment simple via several built-in CLI utilities and a zappa_settings.json configuration file.
Whether you want to deploy small microservices written with a framework like Flask, or large web applications written with a framework like Django, Zappa handles the deployment the same way, without any additional framework-specific configuration.

A limitation of Zappa is its inability to scaffold other AWS services/resources, such as RDS database instances (Aurora, PostgreSQL, MySQL, MariaDB, Oracle or SQL Server), during deployment. If your application directly connects to and accesses these AWS services/resources, then you must first provision them within the AWS Management Console. Below, I'm going to show you how to create an RDS database instance (specifically PostgreSQL) and connect it to a Django application deployed onto a Lambda function. Both the database and the Lambda function will be placed within a private subnet of a VPC (virtual private cloud) to protect them from public Internet access.

To get started, clone the accompanying repository to your local machine. Its README.md file contains instructions on how to deploy and test a serverless Django application using Docker and Zappa. For Zappa to successfully deploy the Django application to AWS, visit the IAM console and enable the required permission policies.

During deployment (zappa deploy <stage>), Zappa packages the application, dependencies and virtual environment into a Lambda-compatible archive, which is uploaded to an S3 bucket. Once it sets up the Lambda function, the IAM roles and policies, and the API Gateway resource, Zappa deletes the archive from the S3 bucket. Using middleware, Zappa turns API Gateway requests into WSGI requests, which can be processed with Python. Once done, Zappa returns the response through the API Gateway. The server is only alive during the execution of the function.
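As an illustration, a minimal zappa_settings.json for a stage named production might look roughly like the following. Every value here is a placeholder; the vpc_config block is what places the function inside the VPC's private subnet, using the subnet and security group IDs you note down during this walkthrough:

```json
{
    "production": {
        "aws_region": "us-east-1",
        "django_settings": "myproject.settings",
        "project_name": "serverless-django",
        "runtime": "python3.8",
        "s3_bucket": "zappa-serverless-django",
        "vpc_config": {
            "SubnetIds": ["subnet-xxxxxxxx"],
            "SecurityGroupIds": ["sg-xxxxxxxx"]
        }
    }
}
```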
If you decide to undo the deployment of the application via the command zappa undeploy <stage>, then Zappa automatically tears down the published API Gateway and Lambda function.

Inside the AWS IAM console, create a new user by clicking the "Add user" button. Name the user serverless-django-admin and enable programmatic access, which means an access key ID and secret access key must be provided to an AWS API, CLI, etc. to interact with AWS services. Under "Attach existing policies directly," check the permission policies mentioned above. Verify the list of permissions and create the user. Copy the access key ID and secret access key and paste them into the .env.* files: set the environment variable AWS_ACCESS_KEY_ID to the access key ID and AWS_SECRET_ACCESS_KEY to the secret access key.

A small network consists of resources connected to one another, such as a Django application connected to a database. Unrestricted access (from the public Internet) to any of these resources leaves them vulnerable to attacks from outside parties. If the database contains any sensitive or personally identifiable information, and this data becomes compromised, then you will be held liable for the data breach! Ideally, the database should have a firewall to filter out all traffic that does not originate from the Django application.

A virtual private cloud (VPC) lets you isolate parts of your network ("subnets") from the public Internet by controlling inbound and outbound traffic via security groups that act as "virtual firewalls." An Amazon VPC comes with 65,536 private IP addresses available for allocation by default, which accommodates both small and large private networks. The network can be divided into subnets, each of which groups a subset of the network's resources. Subnets are either private or public. Resources within a private subnet are not accessible from the public Internet, and they can only access the public Internet under special circumstances.
Resources within a public subnet are accessible from, and can access, the public Internet.

To create a VPC, visit the AWS VPC console and click on the "Launch VPC Wizard" button. This opens the VPC wizard, which guides you through the process of creating a VPC step by step. The wizard presents four options. Since the Lambda function serving the Django application will be publicly accessible via an invocation URL, and the PostgreSQL database will be protected from the public Internet, select the "VPC with Public and Private Subnets" option.

The public subnet will host a network address translation (NAT) gateway, which allows resources within the private subnet to initiate outbound connections to the public Internet without being directly reachable from it. The private subnet will host our Lambda function and RDS database, and the security group rules of the RDS database will be modified to only accept traffic from other resources within the same network. This way, only the Lambda function is accessible from the public Internet.

Let's create these two subnets. The IPv4 CIDR block determines the range of IP addresses available for allocation within a subnet. Set the public subnet's IPv4 CIDR block to 10.0.0.0/21 and the private subnet's IPv4 CIDR block to 10.0.8.0/21. Each has over two thousand IP addresses, which should be more than sufficient. If you need more IP addresses, then pick different blocks.

For the public subnet to be reachable from the public Internet, you must obtain an Elastic IP address, which is a static, public IPv4 address, and associate it with the NAT gateway. To obtain one, skip to the "Obtaining an Elastic IP Address" section and follow the directions. Copy the allocation ID of the Elastic IP address, come back to the wizard, and paste this ID into the "Elastic IP Allocation ID" field.
Note: If you want to keep the private subnet completely isolated from the public Internet (and accessible from a corporate network, etc.), then go back to "Step 1: Select a VPC Configuration" of the wizard and choose the option "VPC with a Private Subnet Only and Hardware VPN Access."

Once the VPC is successfully created, you will find it listed under "Your VPCs." Additionally, you will find the VPC's private and public subnets listed under "Subnets." Under "Security Groups," you will find the VPC's default security group. All instances of the Lambda function belong to this security group. When we provision our RDS database, we will need to assign a security group to it, so take note of this security group's ID for later.

An Elastic IP address is easy to obtain. Inside the AWS VPC console, select "Elastic IPs" under the "Virtual Private Cloud" collapsible list in the left sidebar. Click on the "Allocate Elastic IP address" button. Leave the settings at their defaults, unless you want to pick a specific network border group, which determines where AWS advertises IP addresses. Click the "Allocate" button to allocate this IP address. Once the Elastic IP address is allocated successfully, copy the allocation ID to associate with the VPC's NAT gateway.

Continue on to the second part of this tutorial here, which dives into provisioning an AWS RDS PostgreSQL database and setting its security group rules.
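As an aside, the subnet sizing used above is easy to sanity-check with Python's standard ipaddress module, using the two CIDR blocks from the walkthrough:

```python
# Sketch: verify the address capacity of the /21 CIDR blocks from the
# walkthrough, using only the Python standard library.
import ipaddress

public_subnet = ipaddress.ip_network("10.0.0.0/21")
private_subnet = ipaddress.ip_network("10.0.8.0/21")

# A /21 leaves 32 - 21 = 11 host bits, so 2**11 = 2048 addresses each,
# matching the article's "over two thousand IP addresses."
print(public_subnet.num_addresses)   # 2048
print(private_subnet.num_addresses)  # 2048

# The two blocks must not overlap, since each subnet needs its own range.
print(public_subnet.overlaps(private_subnet))  # False
```

The same check applies to the extra private subnet (10.0.16.0/21) added in part two of this series.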


Serverless Django with Zappa Coming Soon!

Serverless Django with Zappa is coming soon (May 2021) 🎉 Get a sneak peek of this course and others with a newline Pro subscription. With newline Pro you get access to over 500 lessons, thousands of pages, across all of our books and Guides. You get full access to our best-sellers like Fullstack D3, Fullstack React with TypeScript, Fullstack Vue and tons more. You also get Early Access to new Guides like The newline Guide to Serverless Django with Zappa, The newline Guide to Creating a React Hooks Library and The newline Guide to Creating React Native Apps for Mac. We have 45+ Guides scheduled this year that go in-depth into the wide range of things you have to know as a developer in 2021. We'll be raising the price of newline Pro as the library grows, but if you subscribe now, you can lock in your price for as long as you stay subscribed. Join newline Pro here.


Set up a Django App to Respond to S3 Events on AWS Lambda

I'm going to show you how to set up your Django app to respond to S3 events on AWS Lambda. The tool we're using to get Django running on AWS Lambda is Zappa. Follow along to see how responding to events works with this setup.

When running your Django app on AWS Lambda using Zappa, it's easy to send and receive events from other AWS services; Zappa lets you get events from a number of them. Let's configure our Django-with-Zappa project to detect when new files are uploaded to an S3 bucket.

You'll need a Django project configured with Zappa. You can create a Django project, then follow the guide for adding Zappa to your Django setup. Once your project is set up properly, log into the AWS console and create an S3 bucket with a unique name. The bucket won't need public access for this tutorial, since we'll upload files directly using the AWS console. Once the bucket is ready, click on 'Properties' and record the Amazon Resource Name (ARN) for the bucket.

At the root of your project, create a new file called aws_events.py. In this file, we'll add the handler function that will accept all our S3 events. When AWS invokes your function, two objects are passed in: the event object and the context object. The context object contains metadata about the invocation, function, and execution environment, while the event object carries the S3 event details. Our event handler prints out the event information and exits.

Finally, we need to tell Zappa to register our event handler function with AWS so that AWS can start sending events, by adding an entry to the project's zappa_settings.json file. With all the changes saved, go ahead and push these changes to the cloud.

In order to see the event details that our code prints out, let's activate the Zappa console log, which shows you what your Django project is printing. Leave it up and running, and remember you can exit anytime by pressing Ctrl-C.
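The original code listing was lost in formatting, but a handler of the kind described might look like the following sketch. The function name s3_event_handler and the printed fields are assumptions, not the article's exact code; the event shape shown is the standard S3 notification structure:

```python
# aws_events.py (sketch) -- the handler that receives S3 events.
import json


def s3_event_handler(event, context):
    """Print the full event, then each record's bucket and key, and exit."""
    # Dump the raw event so it shows up in the Zappa console log.
    print(json.dumps(event, indent=2))

    # S3 notifications arrive as a list of records.
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        print(f"New object uploaded: s3://{bucket}/{key}")
```

In zappa_settings.json, the handler is then referenced by its dotted path (e.g. aws_events.s3_event_handler) together with the bucket ARN recorded earlier, and `zappa tail` is the CLI command that streams your project's log output.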
Head back over to your AWS console and find your S3 bucket page. Under the Objects tab, find the upload button and upload a file. Any file will do, but it's best to send a smaller one for expediency. After a few seconds, you should see the full event information from S3 in the terminal window.

A few things to note here: in this example, the filename is testfile.jpg and the bucket name is newline-upload-bucket. Of course, your file and bucket will be named differently. Also note that the formatting is a little wonky, but still readable. You can try uploading a few more files, maybe two or three at once. You'll see each upload handled by your Django project as an independent event.

At this point, we have many options. Maybe your app will create a thumbnail if an image was uploaded, or scan a document for keywords. Alternatively, your app might send an email to the client that uploaded the file, confirming receipt. To dive even deeper into setting up your Python apps to run serverless on AWS Lambda, check out our latest course, The newline Guide to Serverless Django with Zappa.
