Experiences With Hasura After Four Months in Production

May 03, 2020  •  12 min read

Introduction

Last year, in October 2019, we decided at WorkClout to develop a new product from the ground up. The product we wanted to build was unlike anything we’d built thus far. The high-level requirements were:

  1. Must be released on three platforms: Web, iOS, and Android
  2. Must be real-time (the product had a collaborative aspect to it)
  3. The product’s API gateway must be GraphQL-based
  4. Must be built on top of a relational database

On top of that, we wanted to go live by January 2020. Yep, that meant we had three months to build a full-stack product and launch it on three different platforms.

A colleague of mine had been pitching me on Hasura. Since Hasura solved requirements #2–#4, it seemed like the right time to give it a go.

This article documents our unbiased experiences learning, deploying, and maintaining Hasura in production.

Caveats

Like all technology adoption stories, there are caveats, and these caveats may have skewed the ease/difficulty of adoption. I felt it was essential to call them out:

  • We would be rewriting our database and code from scratch
  • The only pre-existing constraint was how we do deployment (Kubernetes)
  • We went into this with an open mind, willing to make architectural compromises to ensure Hasura succeeds

Contact Me With Questions

If, after reading this, you have any questions, feel free to @ me on Twitter. You can also email me at: richardgirges [ at ] gmail.com (I am a bit slower responding to emails).

Initial R&D Phase

One thing was evident during this phase: Hasura is incredibly powerful. The value of having out-of-the-box real-time GraphQL subscriptions, on top of a relational database of all things, is unmatched by the other solutions we considered.

Real-Time No Matter What

One of our biggest concerns was that the real-time GraphQL subscriptions would only be real-time when data was updated via Hasura’s GraphQL mutations. What if we wanted to make updates to the database directly?

You’ll be pleased to know that doing inserts/updates directly in the Postgres database does indeed trigger real-time subscription updates in Hasura/GraphQL.

This is one of Hasura’s most powerful traits: because subscriptions reflect whatever ends up in the database, you can take the wheel and execute custom business logic in Postgres triggers (or any other direct SQL) as needed, without compromising on the real-time aspect.
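
To make that concrete, here’s a minimal sketch of a subscription (the user table and camelCased fields are illustrative; we define our actual schema later in this article):

subscription ActiveUsers {
  user {
    id
    firstName
    lastName
  }
}

Any change that lands in the user table - whether it came through a Hasura mutation, a Postgres trigger, or a direct psql session - pushes a fresh result set to every subscriber.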

Vendor Lock-In

An obvious caveat of Hasura is that it does lock you in by replacing many aspects of your backend architecture:

  • Hasura is your API Gateway (and replaces most/all of your CRUD API services)
  • Hasura dictates your approach to permissions/ACLs
  • Hasura controls (or, at minimum, has a commanding presence over) your SQL data model

At WorkClout, we’ve adopted a culture of protecting our database above all else; it’s easier to replace code than it is to replace a database. So we took comfort in the fact that we could always “eject” from Hasura by retaining the Postgres database and dropping everything else.

Hasura enforces many best practices when it comes to data modeling, namely:

  • Foreign key constraints
  • Indexes
  • Lookup tables

Because of this, “ejecting” from Hasura will more than likely leave you with a reliable database schema - one that would make your DBA content, if not proud.

That’s not to say that ejecting from Hasura would be easy; you’d still need to rewrite your API gateway, API services, permission strategy, and more. We decided that this is okay given the value Hasura brings to the table.

Hasura Doesn’t Replace Backend Engineers

It seems this is a common misconception when I bring up Hasura in conversation. Hasura does not replace your backend engineers or backend stack.

Some engineers I’ve spoken to have compared Hasura to the likes of Firebase. While there are some minor parallels, this couldn’t be further from a fair comparison.

Hasura does eliminate the majority of the backend logic you’d write yourself, but you still need custom backend services - unless you’re building an unauthenticated public API with absolutely no custom business logic.

Authentication & Permissions

Authentication

The Hasura docs and blog have covered authentication extensively; it’s the #1 reason you need a custom backend service.

All in all, authentication turned out to be pretty straightforward and uneventful.
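
For context, Hasura supports both JWT- and webhook-based authentication. In webhook mode, for example, your auth service responds to Hasura with the session variables to attach to the request. A minimal sketch (the values and the custom X-Hasura-Org-Id variable are purely illustrative):

{
  "X-Hasura-User-Id": "some-user-uuid",
  "X-Hasura-Role": "member",
  "X-Hasura-Org-Id": "42"
}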

Permissions

Hasura’s permissioning strategy is frankly better than most permission strategies I’ve implemented myself.

  • Every table gets its own set of permission schemes for each CRUD operation

    • Insert permissions
    • Select permissions
    • Update permissions
    • Delete permissions
  • You define custom roles (permission schemes are specific to each role)
  • You’re able to filter rows/columns in the result set based on the permission schemes
  • You have access to session variables when setting up permissions, allowing you to do comparison checks

In short: permissioning in Hasura is exceptionally versatile. One nice-to-have feature would be the ability to do arbitrary comparison checks against session variables, i.e., comparing non-database values against a session variable. There is currently an open issue for this feature request.
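
To make that concrete, a select permission is essentially a boolean expression that Hasura evaluates against every row. Here’s a sketch of what a filter for a member role might look like (the column and session-variable names are illustrative, not our actual schema):

{
  "organization_id": {
    "_eq": "X-Hasura-Org-Id"
  }
}

With a filter like this in place, members only ever see rows belonging to their own organization, and the same expression style applies to insert, update, and delete permissions.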

Custom Business Logic

Hasura is excellent for reading/updating data. But what about custom business logic (e.g., communicating with a third party API or sending push notifications)? Hasura provides three options for this:

  • Event Trigger Webhooks

    • Configure Hasura to fire off a REST request when specific events take place (e.g., a new record inserted into a table)
  • Custom GraphQL Server + Schema Stitching

    • Set up a custom GraphQL server and merge it with Hasura’s GraphQL schema
  • Hasura Actions

    • Set up custom mutations directly in Hasura that proxy to your REST service

Event Trigger Webhooks

We began by using Hasura’s Event Trigger Webhooks exclusively, but we quickly ran into limitations with this approach: event trigger webhooks only fire after data has been successfully written to your database.

These webhooks worked well for asynchronous business logic, e.g., sending out an email or push notification after a record gets inserted or updated.
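
When a trigger fires, Hasura POSTs a JSON payload describing the change to your webhook. Abridged, and with made-up trigger and row values, it looks roughly like this:

{
  "trigger": { "name": "user_inserted" },
  "table": { "schema": "public", "name": "user" },
  "event": {
    "op": "INSERT",
    "data": {
      "old": null,
      "new": { "id": "new-user-uuid", "first_name": "Erica" }
    }
  }
}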

But Event Trigger Webhooks weren’t going to cut it when it came to performing multiple operations as part of a larger transaction. An example is registering a new account, which requires inserts into multiple tables: one for the organization, one for the user.

That leaves you with the remaining two options.

Custom GraphQL Server + Schema Stitching

The idea here is that you build a custom GraphQL server that can do all of the things Hasura can’t do, and you merge both the custom GraphQL server and the Hasura GraphQL server into one.

Schema stitching was by far the most versatile option, but also the heaviest and most time-consuming. Our team was already well-versed in building GraphQL servers, so this felt like a natural course of action for us.

We began by writing custom mutations for the following types of APIs:

  • register (inserts records in multiple tables and sends a welcome email)
  • login (validates the user’s credentials and inserts a record in our session database)
  • logout (removes a record from our session database)

The only real limitation here is that we were unable to access the Hasura GraphQL data types from our custom GraphQL server.

Say we have the following user record defined in Hasura:

type user {
  id: uuid!
  email: String!
  firstName: String!
  lastName: String!
  updatedAt: timestamptz!
  createdAt: timestamptz!
}

^ We can’t access that type in our custom GraphQL server. It’s worth noting that this problem is explicitly addressed by Hasura Actions (more on that in a minute).

Because of this limitation, we started defining our mutation responses to return minimal data. Example:

type RegisterOutput {
  insertedUserId: String!
}

type Mutation {
  register(
    email: String!
    firstName: String!
    lastName: String!
    password: String!
  ): RegisterOutput
}

Hasura Actions

FULL DISCLOSURE: We haven’t yet started using Hasura Actions in production, as the feature was still a little rough around the edges at the time that we were investigating it.

That said, Hasura Actions seem to solve a lot of the pain points that the “Custom GraphQL Server + Schema Stitching” approach presents:

  • You’re able to define custom mutations directly in Hasura without rolling a custom GraphQL server
  • No schema stitching
  • You have access to Hasura GraphQL data types in your custom mutations!

It’s become pretty clear to us that we can eliminate our entire custom GraphQL server now that Hasura Actions has landed in a stable release.
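
From what we saw while investigating, defining an action boils down to declaring the mutation and its types in GraphQL SDL inside Hasura and pointing it at a handler URL - roughly our earlier register example, minus the custom server (treat this as a sketch, not the exact syntax we run in production):

type Mutation {
  register(
    email: String!
    firstName: String!
    lastName: String!
    password: String!
  ): RegisterOutput
}

type RegisterOutput {
  insertedUserId: String!
}

Hasura forwards the mutation’s arguments to the handler as a REST call and returns the handler’s response through GraphQL.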

Defining Our Schema

The Hasura Console, which is the graphical interface for defining your database schema, would make any SQL aficionado proud.

Data & Relationships

You’re able to quickly define your database tables and their relationships using idiomatic SQL practices, such as foreign keys and indexes. These foreign keys give Hasura hints as to what GraphQL relationships may exist between your tables.

For instance, if you created the following two tables in Hasura:

organization
------------
id
name
updated_at
created_at

user
----
id
organization_id
first_name
last_name
updated_at
created_at

…and you set up a foreign key between organization.id and user.organization_id, Hasura detects that this is a 1-to-many relationship.

Hasura then provides you with suggestions to enable this relationship in GraphQL on both the user and organization records, which allows you to access the organization data on the user level or vice versa. Example:

query UserQuery {
  user {
    id
    firstName
    lastName

    # Grabbing the organization record from the user node!
    organization {
      id
      name
    }
  }
}

Naming Conventions

If you’re like us, you’re very particular about your naming conventions: we want our Postgres tables and columns snake-cased, and our GraphQL queries and types camel-cased.

Hasura has you covered. With the use of column aliasing, your database table can look like this:

user
----
id
organization_id
first_name
last_name

…while your GraphQL type can look like this:

type user {
  id: uuid!
  organizationId: uuid!
  firstName: String!
  lastName: String!
}

You can even rename root-level mutations:

# This is what an insert mutation looks like out of the box:
mutation {
  insert_user(objects: [{ firstName: "Erica" }]) {
    returning {
      id
      firstName
    }
  }
}

# Renamed:
mutation {
  insertUser(objects: [{ firstName: "Erica" }]) {
    returning {
      id
      firstName
    }
  }
}
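
Under the hood, both the column aliases and the renamed root fields are just Hasura metadata. Here’s a sketch of the relevant metadata call (based on the v1.x set_table_custom_fields API; double-check the exact syntax against your Hasura version):

type: set_table_custom_fields
version: 2
args:
  table: user
  custom_column_names:
    organization_id: organizationId
    first_name: firstName
    last_name: lastName
  custom_root_fields:
    insert: insertUser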

Database Migrations

Note that Hasura has made improvements to their migration workflows in the newly-released v1.2.0. I haven’t investigated these improvements yet, so I’ll post a follow-up article when I do.

You’re dealing with two types of data in Hasura:

  • Your core database schema (these are your tables, columns, foreign keys, and indexes)
  • Hasura’s internal schema (this is where things like GraphQL relationships, permissions, and event triggers live)

Every schema change you make in the Hasura Console (Hasura’s GUI) results in a newly generated database migration file.

Let’s say I add a nickname column to the user table. Then I decide I don’t need that column, and I delete it. This mishap results in two migration files.

You may argue that we should be more careful about database schema changes. I’d generally agree, but it’s much harder to be careful when you’re using a GUI to design the database.

The result is almost always too many database migration files, or inefficiently created ones - e.g., five separate migration files for altering the same table five times, when the same thing can be accomplished with a single ALTER TABLE statement.
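
For illustration, the hand-written equivalent of those five files could be a single statement (the column changes here are made up):

-- one hand-written migration instead of five console-generated files
ALTER TABLE "user"
  ADD COLUMN nickname text,
  ADD COLUMN phone text,
  ALTER COLUMN first_name SET NOT NULL;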

We tried to work around this issue:

Switching to a “metadata-only” Migration Workflow

Instead of generating migration files, we started using Hasura’s metadata.json file to propagate changes to different environments. The metadata.json file can be exported from and uploaded to any Hasura instance. Once uploaded, Hasura applies all database and internal schema changes.

Unfortunately, this approach didn’t work out. We ran into too many conflicts when multiple developers checked in an updated metadata.json, and it was challenging to identify what had changed by looking at one large metadata file.

NOTE: If you’re a one-person team, this might be a viable approach. It breaks down when you have more than one engineer making changes to your database.

Switching to an External Migration Tool

We started using Flyway to apply database changes to our core database schema. But this approach was hard to get right. Hasura’s internal schema relied on our core database schema and vice versa.

So we ran into a chicken-and-egg situation when we attempted to do things like drop a table. Sometimes you need the internal schema changes to be applied first; other times, you need your core schema changes to be applied first.

Our Solution

In the end, we revamped our approach to migrations entirely:

  1. We stopped using the Hasura Console to make changes
  2. We began manually writing all SQL migrations by hand

    • This forced us to define declarative, well-thought-out SQL statements for every change we wanted to make in our database
  3. We learned the Hasura metadata syntax and began manually writing migrations for the Hasura internal schema as well

As heavy-handed as this approach seems, it fixed our migration woes overnight. Our migrations were now declarative and communicated clear intent, and we were finally able to have productive conversations about database updates in our pull requests.
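
For a sense of what step 3 looks like in practice, here’s a sketch of a hand-written metadata migration that tracks a table and adds the relationship from earlier (written against the v1-era metadata syntax; the exact file layout depends on your Hasura version):

- type: track_table
  args:
    schema: public
    name: user
- type: create_object_relationship
  args:
    table: user
    name: organization
    using:
      foreign_key_constraint_on: organization_id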

That said, we’re excited to look into the new migration workflow released in Hasura v1.2.0. It appears that they’ve taken a more declarative approach to migrations, so it’s great to know the Hasura team is cognizant of that particular need.

Community Support

I’ve communicated with the Hasura team via their GitHub Issues page. They’re highly responsive and cordial. The Hasura team has even fixed a couple of bugs that I’ve filed, and they’ve already released these bug fixes in the latest version of Hasura.

There’s nothing more you could ask for in an open-source project. I have no qualms about Hasura’s future or its community support.

Conclusion

I’d undoubtedly choose Hasura again in a future project. There are some things I regret spending too much time on, like trying to figure out metadata-only or external-tool-based migrations.

But even then, I don’t think we could’ve rolled a custom solution as robust as Hasura in the allotted timeframe. We managed to bootstrap a real-time GraphQL backend and a brand new database, build two clients (React Web & React Native), and go live with a brand-spanking-new product deployed to Web, iOS, and Android in THREE short months.

Fast forward four more months, and we’ve yet to run into any issues. Upgrading Hasura has been a breeze. Our migration woes are over, and we’re looking forward to what the Hasura team cooks up next.

Stay tuned for future updates on:

  • Hasura Actions
  • Hasura’s new config v2 Migrations

Written by Richard Girges.

  • #technology