Hanami & Fly.io Complete Example
In a previous article, I walked through deploying a basic "hello world" Hanami app on Fly.io. That task was rather simple: we just used the dockerfile and fly.toml files created for us by the fly cli tool. In this article, we will walk through the deployment of a fully functioning Hanami app, including persistence using Postgres. This article assumes that you have a working Hanami app that you wish to deploy. The test app I created for this walk-through used rom-rb and Postgres. The instructions provided here may not work with other gems, and will not work with other databases, like MySQL. Finally, adding persistence to Hanami will not be covered in this article; I will cover that topic elsewhere.
As with the previous walk-through, you will need to sign up with Fly.io and download the fly cli tool. You are required to provide a credit card on your account, but you may be able to complete this walk-through without incurring any charges (depending on the options you select).
Next, you will need your Hanami app. This part can be a bit tricky, as we will see later. If your app already has persistence implemented, with one or more relations created, then we must perform an extra step to get our database created on Fly.io. If you have not yet implemented any relations, then we can perform this step from an SSH console after deploying our app. Let’s get started!
Ready for Launch!
Our first step is to configure our app for deployment on Fly.io. As with the prior walk-through, we will use the fly cli tool to generate a dockerfile and fly.toml file. In this case, we will also create a Fly database cluster. To start, navigate to the root directory of your app and run fly launch. The tool will prompt you to enter a name for your app; it may only consist of lower-case letters and hyphens. Next, you will select a deploy region.
The next prompt will ask if you want to set up a Postgres database. Enter yes, and then configure your cluster. For more information, this article describes how Fly Postgres clusters work. The cli will offer a cluster of three advanced machines and 10GB of persistent storage. These defaults will incur charges if you accept them. Charges are time-based, so it may not be a big deal for you to try them out and then suspend or destroy the apps when you are done. However, if you select two (or one, if you don't care about redundancy for the sake of this demo) "shared-cpu-1x 256mb" machines, and less than 3GB of storage, you should not incur any charges at all on the free "Hobby Plan."
The cli will then display the password and connection info for your cluster. By default, your cluster will receive a Fly.io app name of <your-app-name>-db. Save the connection info somewhere now, because you will not be able to view it again.
If you prefer, you can also set up a Fly Postgres cluster manually, but this way is simpler and has the added convenience of saving your database connection string, complete with password, as an encrypted Fly secret ENV value. This is very handy for managing production connections to the database.
Finally, you can decline the prompt to set up Redis and any prompt asking if you would like to deploy; we have more work to do first.
Creating a Custom Hanami Dockerfile
If you examine your project directory now, you will see that the fly cli has generated a dockerfile and fly.toml file for your app. To recap, Fly.io uses dockerfiles to configure images to host your app, but then runs the images as VMs, rather than containers. Roughly speaking, the dockerfile will configure the image, and the fly.toml file will contain any further configuration needed to orchestrate the VMs, including scaling rules, etc.
In the previous walk-through, we were able to use the Fly-generated dockerfile as-is. That will not be the case now. We will need to start from scratch. When I did this the first time, it took me a full day of fiddling and troubleshooting. I looked at several examples to understand what was needed, including Tim Riley’s Decaf Sucks dockerfile, which is itself based on the Ruby on Whales example from Evil Martians. I even got a timely assist from Sam Ruby (author of Agile Web Development With Rails) in the Fly.io forums.
In the end, I based my solution primarily on the official Fly.io Rails Dockerfile, customized for Hanami. I did not need to make any changes to the fly.toml file, but I did write a small "entrypoint" script to handle database chores in production. Let's look at the dockerfile first.
This is what I ended up with:
```
 1 # syntax = docker/dockerfile:1
 2
 3 ARG RUBY_VERSION=3.2.2
 4 FROM ruby:$RUBY_VERSION-slim as base
 5
 6 # Hanami app lives here
 7 WORKDIR /hanami
 8
 9 # Set production environment
10 ENV HANAMI_ENV="production" \
11     BUNDLE_WITHOUT="development:test" \
12     BUNDLE_DEPLOYMENT="1"
13
14 # Update gems and bundler
15 RUN gem update --system --no-document && \
16     gem install -N bundler
17
18
19 # Throw-away build stage to reduce size of final image
20 FROM base as build
21
22 # Install packages needed to build gems
23 RUN apt-get update -qq && \
24     apt-get install --no-install-recommends -y build-essential libpq-dev
25
26 # Install application gems
27 COPY --link Gemfile Gemfile.lock ./
28 RUN bundle install && \
29     rm -rf ~/.bundle/ $BUNDLE_PATH/ruby/*/cache $BUNDLE_PATH/ruby/*/bundler/gems/*/.git
30
31 # Copy application code
32 COPY --link . .
33
34
35 # Final stage for app image
36 FROM base
37
38 # Install packages needed for deployment
39 RUN apt-get update -qq && \
40     apt-get install --no-install-recommends -y curl postgresql-client && \
41     rm -rf /var/lib/apt/lists /var/cache/apt/archives
42
43 # Run and own the application files as a non-root user for security
44 RUN useradd hanami --home /hanami --shell /bin/bash
45 USER hanami:hanami
46
47 # Copy built artifacts: gems, application
48 COPY --from=build /usr/local/bundle /usr/local/bundle
49 COPY --from=build --chown=hanami:hanami /hanami /hanami
50
51 # Entrypoint prepares the database
52 ENTRYPOINT ["./bin/fly-entrypoint"]
53
54 # Start the server
55 EXPOSE 8080
56 CMD ["bundle", "exec", "rackup", "--host", "0.0.0.0", "--port", "8080"]
```
Let’s go through the dockerfile line-by-line to see what it does. Note that the dockerfile uses a multi-stage build process. This is done to create a final image that is as small as possible and only includes the packages that are absolutely essential to run our app.
1- Uses a “magic comment” parser directive to set the syntax version for the dockerfile. Many dockerfiles don’t include a syntax parser directive, but we need it because our dockerfile uses some fancy commands only available in newer versions of the dockerfile specification. Similar to a Gemfile spec, we are specifying that this script is using syntax compatible with the dockerfile specification version 1.x.x.
3- Sets the Ruby version used to select the container base image. This needs to be manually kept in sync with the Ruby version your project uses.
4- Selects the base image for the container using the Ruby version set in line 3 and assigns it the label “base.” This base image will come with the specified Ruby version pre-installed. By default, this is a Debian Linux image, and we are selecting the “slim” (minimal/small) version. We could also specify a specific Debian release, or choose a completely different base image.
7- Creates a hanami directory in the current (anonymous) build stage, and sets it as the working directory for the remainder of the script. This will be the home for our app files.
10- Sets the HANAMI_ENV environment variable to "production." Note that the DATABASE_URL environment variable was already set for us as a global "secret" value. It will be available in all machines deployed in this app cluster, and will be used by our running app to connect to the database cluster. You can manage this secret in your Fly.io dashboard for this app, but you cannot view its value once it is set (you know, 'cus it's a secret).
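To see how these values reach the app, here is a small local sketch. The DATABASE_URL below is a made-up placeholder in the general shape Fly.io uses; on Fly.io the real value is injected from the app's encrypted secrets.

```shell
# Simulate the production environment locally (values are hypothetical):
export HANAMI_ENV="production"
export DATABASE_URL="postgres://postgres:secret@my-app-db.flycast:5432/my_app_production"

# Any child process (such as our app server) inherits these values:
sh -c 'echo "booting in $HANAMI_ENV mode"'
```

This is all ENV in a dockerfile does: it sets values that every later build step and the running container inherit.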
11 & 12- Sets environment variables used by Bundler.
15- Updates RubyGems itself to the latest version. Remember that this base image came with Ruby and RubyGems pre-installed.
16- Installs Bundler.
20- Starts a new build stage, cleverly labeled "build."
23- Updates the Linux package lists.
24- Installs two essential Linux packages. build-essential includes all of the "make" tools needed to compile executables on Linux; these are needed to install certain gems that require native extensions. libpq-dev provides the headers and libraries needed to install and use Postgres client gems on this version of Linux.
27- Copies Gemfile and Gemfile.lock from our app directory to the working directory (/hanami) of the "build" stage. Note that this line uses the --link flag on the COPY command. This is one of the "fancy" commands that required us to include a syntax parser directive in line 1. You can read about the --link flag here. As I understand it, this will streamline future rebuilds of your image by allowing this layer to "rebase" onto changes made earlier in the build process, such as when the base image is updated to a new Ruby or Linux version, without having to rebuild every layer created in our dockerfile. Sounds like dark magic to me, but whatever.
28- Runs bundle install to install your dependencies. Remember that gems that compile native extensions, such as Nokogiri, will have the necessary build tools available because they were installed on line 24.
29- Deletes Bundler's cached files. They won't be needed in production.
32- Copies our application files into the /hanami directory of the "build" stage.
36- Starts a new build stage. As this is the final build stage, no label is necessary.
39- Once again, we update the Linux package lists, because we need the Postgres client software in our final build.
40- Installs the Postgres client for our current Linux distribution.
41- Immediately deletes all of the package lists and caches from the apt-get update on line 39. We won't need them on production servers, which will be destroyed and re-provisioned, not updated.
44- Creates a new "hanami" user, assigns /hanami as the home directory, and sets bash as the preferred shell. We'll use this user to run our app to avoid the security issues associated with running the app as root.
45- Switches to the "hanami" user and "hanami" group for all subsequent instructions and for the running container.
48- Copies the installed gems from the bundle directory in the "build" stage to the same directory in the current, final stage.
49- Copies our app files from the /hanami directory in the "build" stage to /hanami in the current stage, and changes ownership of all files to the "hanami" user.
52- Designates the script located at ./bin/fly-entrypoint in our project as the container's "entrypoint." We will write this script and use it to carry out any database operations we need executed on the server before our app is launched.
55- Exposes port 8080 to the world. This is the port our app will listen on to serve requests. The value 8080 is a Fly.io standard that we could change if we wanted to. If you change this value, you will also have to update your fly.toml with the new port value to let Fly.io know about the change.
56- This is the command used to launch our app when this container is put into production. Since we have an entrypoint script, that script will be called first, with this command passed to the entrypoint as parameters. Note that we are using the preferred "exec" form of CMD, where the words of the command are passed as an array of strings. We could use whatever command is appropriate here, including bundle exec hanami server, as long as we also specify the host IP (0.0.0.0) and port (8080) to work properly with Fly.io.
Okay, that was a lot! As I said, we don't need to make any changes to the fly.toml generated by the fly cli, but we do need to write that entrypoint script.
The Chicken and the Egg Problem
Before we actually write our script, we need to discuss the chicken and the egg problem. In my case, I had a running app that included persistence, migrations, etc. However, when I attempted to deploy to Fly.io, I had to start with a fresh database cluster that had no database created yet. One way to handle this would be to spin up my application, access the console, and run bundle exec bin/hanami db create. Unfortunately, I couldn't access a console, because my app would crash on deployment since the database was not available. You see my problem.
If you are starting your project from scratch, and adhere to a "deploy early and deploy often" philosophy, you can avoid this problem entirely. If you deploy before setting up persistence, then you can access the app's console using the fly cli and run any Hanami commands you need before trying to connect to the database. The command, run from your app directory, is fly ssh console.
If you already have persistence implemented, then your choices are different. One approach might be to add a line to your entrypoint script to create the database, and then immediately remove that line and re-deploy once the database is created. What I ended up doing was connecting to a Fly console on the database cluster itself and using the Postgres createdb utility to create my database. Once it was created, I used bundle exec bin/hanami db migrate in my entrypoint script to migrate the production database.
The commands I used to create the database were:
```
> fly ssh console -a <postgres-cluster-app-name>
> createdb -h 127.0.0.1 -U postgres -W <db-name>
```
In the first line, I launched a Fly console, but I had to specify the application name of my Postgres cluster (as assigned above). Once connected, I used the next command to tell Postgres to create the database. The -h flag specifies the host IP address to use; 127.0.0.1 is the Fly.io standard. The -U (uppercase) flag specifies the Postgres user name, in this case "postgres." The -W (uppercase) flag tells Postgres to prompt for the user password before executing the command (it would prompt eventually anyway, but this saves a round trip). The password is the one assigned when the cluster was created. Hopefully you wrote it down. Finally, I provided the name of the database. If you follow Hanami conventions, this should be <your-app-name>_production. Once this was done, I was finally able to launch my app on Fly.io and let the entrypoint migration command handle the rest.
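As a sanity check, the database name you pass to createdb should match the one at the end of the DATABASE_URL secret Fly.io saved for you. Here is a small shell sketch using a made-up connection string in the shape Fly.io uses:

```shell
# Hypothetical connection string, shaped like the DATABASE_URL secret:
url="postgres://postgres:secret@my-app-db.flycast:5432/my_app_production"

# The database name is everything after the final slash:
dbname="${url##*/}"
echo "$dbname"
```

If these two names don't agree, the app will boot and then fail on its first query, which is a confusing way to find out.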
My last step was to access my Fly app’s console to seed the production database with production data. I set up a seed script for this. If you are migrating an existing app, you could dump your existing production database and load it into your Fly.io Postgres cluster manually. Thankfully, that is beyond the scope of this walk-through.
Let’s get to that entrypoint script . . .
This script could be as simple or as complex as you like. Someone with more experience might write a script that could detect whether the database exists, and then create it if necessary. I took the simplest approach I could think of. My entrypoint script looks like this:
```
 1 #!/usr/bin/env bash
 2 # exit on error
 3 set -o errexit
 4
 5 # Uncomment for first deploy
 6 bundle exec bin/hanami db create
 7
 8 # Uncomment after first deploy
 9 # bundle exec bin/hanami db migrate
10
11 # Execute the container's main process (CMD in the dockerfile)
12 exec "$@"
```
Don't forget to chmod +x bin/fly-entrypoint after creating this file to allow it to be executed. Let's look at what it does.
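If you're unsure what the executable bit does, here's a tiny demonstration using a stand-in script (the paths are made up for the demo):

```shell
# Create a stand-in entrypoint script in a temp directory:
mkdir -p /tmp/demo/bin
printf '#!/usr/bin/env bash\necho "entrypoint ran"\n' > /tmp/demo/bin/fly-entrypoint

# Without the executable bit, invoking it directly fails with "Permission denied":
chmod -x /tmp/demo/bin/fly-entrypoint
/tmp/demo/bin/fly-entrypoint 2>/dev/null || echo "not executable yet"

# After chmod +x, it can be run directly, which is what Docker's ENTRYPOINT needs:
chmod +x /tmp/demo/bin/fly-entrypoint
/tmp/demo/bin/fly-entrypoint
```

Forgetting this step is a classic cause of the container failing to start with a permission error.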
1- The “shebang,” to specify bash as the scripting language.
3- Direct the script to exit with an error if any command returns an error. In other words, we aren’t trapping errors, we’re bailing and calling for help.
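Here's a quick standalone illustration of errexit behavior (not part of the deploy script itself):

```shell
# With errexit set, the script stops at the first failing command,
# so "step 2" is never printed:
bash -c 'set -o errexit; echo "step 1"; false; echo "step 2"' || echo "aborted after failure"
```

For a deploy script, this is exactly what we want: if db create or db migrate fails, the app should not attempt to start.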
6- This line is un-commented for the first deploy only. If the deploy is successful, this line can be commented out or removed entirely and line 9 un-commented. You should re-deploy immediately after the db is created and these lines are changed to get a fully migrated app in production.
9- This line should be un-commented after the first successful deploy, once you are confident that the database was created successfully. This line will remain in our script to run any pending migrations every time the app is deployed.
12- Recall that the CMD arguments from our dockerfile will be passed to this ENTRYPOINT script as parameters. We access them using $@. This line executes the parameterized command, launching our app.
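To see how ENTRYPOINT and CMD fit together, here is a runnable sketch outside of Docker (the file path and messages are made up for the demo):

```shell
# A minimal entrypoint that does a chore, then replaces itself with
# whatever command it was given, just like Docker passing CMD to ENTRYPOINT:
cat > /tmp/demo-entrypoint <<'EOF'
#!/usr/bin/env bash
set -o errexit
echo "pre-launch chores done"
exec "$@"
EOF
chmod +x /tmp/demo-entrypoint

# Invoke it the way Docker would: ENTRYPOINT first, CMD words as arguments.
/tmp/demo-entrypoint echo "server starting on 0.0.0.0:8080"
```

Because exec replaces the shell process rather than spawning a child, the launched command ends up as the container's main process and receives signals (such as shutdown requests) directly.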
If you have more chores to complete upon each deploy, just add them to your fly-entrypoint script. I should also point out that you can use whatever scripting solution you choose for the ENTRYPOINT. I chose bash because it was fast and simple (and I had examples to follow). You could just as easily write a Ruby script. You could also create a rake task to handle your deploy chores, and either invoke that task from your entrypoint script or use it as the ENTRYPOINT command itself. Just remember that your script's last responsibility is to execute the parameterized CMD to get things up and running.
And that is all! I hope this is helpful! Corrections, questions, and feedback are welcome on Mastodon. Happy Hanami-ing!