Crates to know

The vibrant Rust ecosystem makes building complex applications in Rust much easier. This ecosystem continues to grow so that even if something is currently missing, chances are a new crate is on the horizon that will solve your problem. And if not, you can help the community by writing a crate yourself.

We will expand our understanding of building a web application by adding persistence via a SQL database. There are a few tools for working with SQL in Rust, ranging from direct database access via wrappers around C libraries to full-blown Object Relational Mapping (ORM) libraries. We will take a middle-ground approach and use the Diesel crate.

Diesel

Diesel is both an ORM and a query builder with a focus on compile time safety, performance, and productivity. It has quickly become the standard tool for interacting with databases in Rust.

Rust has a powerful type system which can be used to provide guarantees that rule out a wide variety of errors that would otherwise occur at runtime. Diesel has built abstractions that eliminate incorrect database interactions by using the type system to encode information about what is in the database and what your queries represent. Moreover, these abstractions are zero-cost in the common cases, which allows Diesel to provide this safety with performance equal to or better than C.

The crate currently supports three backends: PostgreSQL, MySQL, and Sqlite. Switching between databases is not completely free, as certain features are not supported by all backends. However, the primary interaction between your code and the database is in Rust rather than SQL, so much of your interaction with Diesel is database agnostic. For managing common database administration tasks like migrations, Diesel provides a command line interface (CLI), which we will show how to use.

The Diesel getting started guide is a great resource for an overview of how Diesel works and what it can do.

Building a blog

We are going to build a JSON API around a database that represents a blog. In order to capture most of the complexity of working with a database we will have a few models with some relationships. Namely, our models will be:

  • Users

  • Posts

  • Comments

A Post will have one User as an author. Posts can have many Comments where each Comment also has a User as author. This provides enough opportunity for demonstrating database interactions without getting overwhelmed by too many details.

We will start out by getting all of the necessary infrastructure in place to support Users. This will involve putting a few abstractions in place that are overkill for a single model; however, they will pay dividends when we subsequently add Posts and Comments.

As we have already gone through quite a few details related to Actix and building an API, the focus here will be on the new parts related to working with persistence. Therefore some familiarity with actix-web will be assumed.

Getting set up

Let's get started, as with all Rust projects, by having Cargo generate a new project for us (we'll name it blog):
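
```
$ cargo new blog   # the project name "blog" is our choice
```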

Our first step will be editing our manifest to specify the dependencies that we are going to need:
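
A sketch of the dependencies section follows; the exact version numbers are illustrative, so check crates.io for current releases:

```toml
[dependencies]
# versions below are illustrative, not prescriptive
actix-web = "4"
serde = { version = "1", features = ["derive"] }
serde_json = "1"
diesel = { version = "2", features = ["sqlite", "r2d2"] }
dotenv = "0.15"
```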

The new dependencies beyond what we have previously used with actix-web are diesel and dotenv. The diesel dependency we have already discussed, but there is a bit of a new twist here.

Cargo supports the concept of features, which allows crates to specify groups of functionality that you can select when you depend on a crate. This is typically used for conditionally including transitive dependencies and for conditional compilation, to include or exclude code based on what you do or don't need. Good crates allow you to pick and choose only what you want, to minimize compilation times and binary sizes. The Rust compiler can do some work to remove code that is unused in your final binary, but using features is one way to ensure this happens and to make it explicit what you are using.
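
As a purely hypothetical illustration of the syntax, a dependency entry can opt out of a crate's default features and select only specific ones:

```toml
# hypothetical crate shown only to illustrate feature syntax
some-crate = { version = "1", default-features = false, features = ["json"] }
```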

One thing Diesel uses features for is to specify which backend you want to use. For our purposes we are going to use Sqlite. As an embedded, file-based database, it lets us work with persistence without having to set up the external dependency of a database server. We will be clear as to which parts of this code depend on this database choice.

The other feature of Diesel that we are specifying, r2d2, adds the r2d2 generic database connection pool crate as a dependency of Diesel and turns on some functionality. Any reasonable production system will use a connection pool for interacting with a database; the reasons are best described in the r2d2 documentation:

Opening a new database connection every time one is needed is both inefficient and can lead to resource exhaustion under high traffic conditions. A connection pool maintains a set of open connections to a database, handing them out for repeated use.
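
To make this concrete, here is a minimal sketch of building such a pool with Diesel's r2d2 integration; the helper name establish_pool is our own, and we will wire the pool into the application later:

```rust
use diesel::r2d2::{self, ConnectionManager};
use diesel::sqlite::SqliteConnection;

// A pool of Sqlite connections managed by r2d2 (illustrative type alias).
type Pool = r2d2::Pool<ConnectionManager<SqliteConnection>>;

// `establish_pool` is a hypothetical helper, not part of Diesel itself.
fn establish_pool(database_url: &str) -> Pool {
    let manager = ConnectionManager::<SqliteConnection>::new(database_url);
    r2d2::Pool::builder()
        .build(manager)
        .expect("failed to create database connection pool")
}
```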

Finally, we include dotenv as a dependency, which is a tool for managing environment variables. By default Dotenv looks for a file named .env in the current directory which lists environment variables to load. As we will need it later, let's create this file now with a single variable, DATABASE_URL, set to a file URL pointing at the file in the current directory that will hold our Sqlite database (we'll name it blog.db):
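
```
DATABASE_URL=file:blog.db
```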

Installing the Diesel CLI

As we previously mentioned, Diesel has a CLI for managing common database tasks. Cargo has the ability to install binary crates on your system via the cargo install command. Therefore, we can install the Diesel CLI with:
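
```
$ cargo install diesel_cli --no-default-features --features sqlite
```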

By default this installs a binary at ~/.cargo/bin but it is possible to configure this.

As we mentioned, Diesel uses features for turning certain functionality on and off. Crates that use features typically have a default set that is enabled if you do not specify otherwise. It is possible to turn off this default behavior via the command line argument --no-default-features. For the CLI we do this because the default is to include support for all three database backends, which will cause errors when running CLI commands if you do not have the necessary components for each backend installed. So we turn off the defaults and then turn on only Sqlite support via --features sqlite.

Migrations

The Diesel CLI binary is named diesel, so we can set up our project for working with Diesel by running the setup command:
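
```
$ diesel setup
```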

By default this assumes the existence of an environment variable named DATABASE_URL or a .env file with this variable defined. It is possible to pass this manually to each CLI command, but using one of the aforementioned methods is much more convenient. This is one reason we created the .env file above.

This will create a migrations directory as well as a diesel.toml file. If you are using Postgres this command will also create a migration that creates a SQL function for working with timestamps. This does not happen for other backends.

Diesel manages migrations using a directory called migrations, with a subdirectory for each migration. Each subdirectory's name is a timestamp prefix followed by a descriptive name. Within each migration directory are two self-explanatory files: up.sql and down.sql. Diesel uses SQL for migrations rather than a custom DSL, which is why changing databases requires rewriting most of your migrations.

Running migrations

The primary use for the CLI is managing migrations, which is done with the migration command and its subcommands. To see all migrations and whether they have been applied, we use the list subcommand:
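
```
$ diesel migration list
```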

To run all pending migrations we use the run subcommand:
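
```
$ diesel migration run
```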

You can get help for diesel in general by calling diesel --help, or for a particular command by passing --help to that command, e.g.:
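
```
$ diesel migration --help
```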

Schema

So you manage your database using the Diesel CLI and a set of migration files written in SQL, but all of that operates outside of Rust. To connect your database to Rust, Diesel uses a schema file that is a code representation of your database. Running the migrations also updates this code representation at src/schema.rs. The name of this file, and whether this happens at all, can be controlled via settings in diesel.toml, but the default is usually what you want.

Users

Let's get started on our application, which will support managing users. We are not going to get into the weeds of authentication or authorization; rather, our focus will be on manipulating persisted data via a JSON API.

Create users migration

The first step is to add a migration that will create the database table users to hold our users:
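
```
$ diesel migration generate create_users
```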

This creates a directory migrations/YYYY-MM-DD-HHMMSS_create_users with two empty files. In up.sql let's put the SQL for creating our users table:
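
A minimal version consistent with the description that follows:

```sql
-- the two columns described below: an integer primary key and a username
CREATE TABLE users (
  id INTEGER PRIMARY KEY NOT NULL,
  username VARCHAR NOT NULL
);
```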

Each user has an id which will be the primary key for fetching users as well as the key used for referencing users in other tables. We also require each user to have a username which is a string. You can get arbitrarily creative here depending on your domain, but for simplicity we only have these two columns.

This syntax is specific to the Sqlite backend, so it should be clear why all migrations could need to be rewritten if you decide to use a different backend. For example, some databases allow you to restrict the size of VARCHAR columns, which might be a reasonable thing to do for a username, but Sqlite does not actually enforce any limit you happen to write.

The corresponding down.sql file should perform whatever transformations are necessary to undo what happens in up.sql. In this case, as the up migration creates a table, we can drop that table in our down migration:
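
```sql
DROP TABLE users;
```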

You can do whatever you want in up and down, but for your own sanity, the schema should be the same before running the migration and after running up followed by down. That is, down should revert the schema to its prior state. As some migrations will update data in the database, it is not necessarily true that the data is preserved by running up followed by down. The reversibility of migrations is typically only a statement about the schema; the exact semantics are up to you.

Make username unique

We create yet another migration, this time to add an index to our users table. We do this to ensure that usernames are unique in our system and that we can look up users by their username quickly. First we have diesel create the files for us (the migration name, index_username, is our choice):
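
```
$ diesel migration generate index_username
```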

Then we add the code to create the index to up.sql:
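
```sql
-- the index name username_unique_idx is our choice
CREATE UNIQUE INDEX username_unique_idx ON users (username);
```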

Again this is Sqlite syntax, although all backends have similar syntax for this operation. The important part of this index is the UNIQUE keyword. This lets us rely on the database to enforce unique usernames rather than introducing racy code that tries to manage this at the application layer.

As before, we want our down migration to reverse what we did in up, so we drop the index in down.sql:
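
```sql
DROP INDEX username_unique_idx;
```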

Schema

We run our migrations via the Diesel CLI with:
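
```
$ diesel migration run
```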

Once this runs successfully two things will be true. First, the database file at blog.db will be in the state after running all of our up migrations. You can verify this by opening the Sqlite shell:
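
```
$ sqlite3 blog.db
```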

and dumping the schema:
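
```
sqlite> .schema
```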

Note that the __diesel_schema_migrations table is automatically created by Diesel and it is how the CLI knows which migrations have or have not run.

The second thing that happens is the file src/schema.rs is updated with Rust code which Diesel uses to understand the state of your database. This file should look like:
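
Roughly the following; newer versions of the Diesel CLI qualify the macro as diesel::table! and add a generated-file comment, but the content is the same:

```rust
table! {
    users (id) {
        id -> Integer,
        username -> Text,
    }
}
```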

It doesn't look like much because of the magic of the macros that Diesel provides. Here only the table! macro is used, but there are a few more that we will encounter as our data model evolves.

Building the application

With the database taken care of for the moment, we turn now to our actual application. We are going to build out the scaffolding which supports users and also will be easily extensible later on.

Main

As we have done before, we are going to split our application into a small main.rs, which is the binary entry point, and keep everything else in a library which our main can call in to. So without further ado, let's add the following to main.rs:
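
A minimal sketch of what this could look like, assuming our library crate is named blog and exposes a run function that builds and runs the server:

```rust
use dotenv::dotenv;

fn main() -> std::io::Result<()> {
    // Load variables from .env into the environment, ignoring any error.
    dotenv().ok();
    // `blog::run` is a hypothetical entry point exported by our library.
    blog::run()
}
```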

Everything here we have seen in our simpler web applications except the interaction with dotenv. Calling dotenv().ok() sets environment variables based on the contents of the .env file in the current directory, ignoring any error that might result. Dotenv only sets variables from that file if they are not already set, so you can always override the file by setting a variable directly in your environment before running the program.
