@moneypot/hub is built on top of Postgraphile v5, which automatically turns a Postgres schema into a GraphQL API. Any non-trivial game needs its own database tables. This guide explains how to add custom tables to your hub implementation.
When you use the @moneypot/hub server, it creates several Postgres schemas with different visibility levels:
| Schema | Purpose | Exposed to GraphQL? |
|---|---|---|
| `hub` | Core tables exposed to GraphQL | Yes ✅ |
| `hub_hidden` | Internal tables and functions | No |
| `hub_secret` | Security-sensitive tables | No |
For your own application, follow this convention by creating:

- `app` schema for tables you want to expose via the GraphQL API
- `app_hidden` for internal tables that shouldn't be directly accessible
- `app_secret` for security-sensitive data that needs strict protection

The `app_postgraphile` database user will only have access to the `app` schema, ensuring proper separation of concerns.
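A sketch of the corresponding DDL (assuming the `app_postgraphile` user already exists in your database):

```sql
create schema app;
create schema app_hidden;
create schema app_secret;

-- Only the app schema is visible to the GraphQL role
grant usage on schema app to app_postgraphile;
```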
If your table is in the `app` schema that's exposed via `startAndListen({ schemas: ['app'], ... })`, then you will want to let the `app_postgraphile` user read it:

```sql
grant select on app.bet to app_postgraphile;
```
The recommended approach for managing database schema changes is to use the built-in automigration system:
For detailed instructions, see the Database Migrations guide.
Once you've created your custom tables, you'll need to expose them through GraphQL:
Learn more about this process in the Extending GraphQL guide.
Here's an example of how you might create a dice bet table with all this in mind:

```sql
drop schema if exists app cascade;
create schema app;
grant usage on schema app to app_postgraphile;

create table app.dice_bet (
  id uuid primary key,
  wager float8 not null,
  net float8 not null,        -- negative if lost, wager*(target-1) if won
  target float8 not null,     -- 2.0 means user was going for 2x payout
  actual float8 not null,     -- e.g. actual roll was 1.8x (loss) or 3x (win)
  currency_key text not null, -- 'HOUSE', 'BTC', etc.
  -- let us easily look up the latest bets for users, casinos, experiences
  user_id uuid not null references hub.user(id),
  casino_id uuid not null references hub.casino(id),
  experience_id uuid not null references hub.experience(id),
  -- currencies are specific to each casino
  foreign key (currency_key, casino_id) references hub.currency(key, casino_id)
);

-- GRANT
grant select on table app.dice_bet to app_postgraphile;

-- RLS
alter table app.dice_bet enable row level security;
create policy select_dice_bet on app.dice_bet
  for select
  using (
    hub_hidden.is_operator()
    or user_id = hub_hidden.current_user_id()
  );
```
Prefer singular table names. For example, `create table app.bet` instead of `create table app.bets`. Singular names tend to make more sense in functions and GraphQL logic, and they are automatically inflected into plural forms in GraphQL.
Enable row level security on every table in the `app` schema. This way, all access is prohibited by default until you write RLS policies that say otherwise.
You also usually want to allow access for these cases:

- `hub_hidden.is_operator()` lets your API key access everything. Ex: this lets your admin dashboard see all records of each table.
- `user_id = hub_hidden.current_user_id()` lets the user access their own records.

```sql
alter table app.bet enable row level security;
create policy select_bet on app.bet
  for select
  using (
    hub_hidden.is_operator()
    or user_id = hub_hidden.current_user_id()
  );
```
This makes your schema changes incremental and easy to reverse manually.
UUID v7s are chronological, so they are sortable. But they also have a random component so that your rows can't be enumerated.
The SQL function `hub_hidden.uuid_generate_v7()` will generate one:

```sql
create table app.example (
  id uuid primary key default hub_hidden.uuid_generate_v7()
);
```
You don't need a `created_at timestamptz` column because you can extract the timestamptz from the UUID v7 with `hub_hidden.extract_timestamp_from_uuid_v7(uuid)`.
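For instance, assuming the `app.dice_bet` table from the example above, a query that recovers each bet's creation time from its id might look like:

```sql
select
  id,
  hub_hidden.extract_timestamp_from_uuid_v7(id) as created_at
from app.dice_bet
order by id desc -- UUID v7s sort chronologically
limit 10;
```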
Remember to index your foreign key columns; Postgres does not create these indexes automatically. This is especially important for the `user_id`, `casino_id`, and `experience_id` columns.
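For the `app.dice_bet` table above, that could look like this (index names are illustrative; adjust to your schema):

```sql
create index dice_bet_user_id_idx on app.dice_bet (user_id);
create index dice_bet_casino_id_idx on app.dice_bet (casino_id);
create index dice_bet_experience_id_idx on app.dice_bet (experience_id);
```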
Of course, it improves query performance when you're filtering by foreign keys.
But it also generates the GraphQL that lets you query through foreign key relations. For example:
```graphql
query {
  userById(id: 42) {
    diceBetsByUserId {
      nodes {
        id
        wager
        net
      }
    }
  }
}
```
TODO
(While these docs don't plan on doing a deep dive into writing concurrency-safe SQL code, this is such an important topic that we will at least scratch the surface here.)
Any time you want to update the database, especially based on information that's already in the database, you need to be wary of race conditions if your logic were run in parallel by multiple queries at the same time.
For example, here's a common `makeBet` implementation:
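A naive version might read the balance, check it in application code, then write the result. A sketch of that flow (the `app.balance` table and its columns are illustrative, not part of the hub schema):

```sql
-- 1. Read the user's balance (no locking)
select amount from app.balance where user_id = $1;

-- 2. In application code: check amount >= wager, then compute the
--    roll and the bet's net result.

-- 3. Write the results
update app.balance set amount = amount + $2 -- $2 = net (negative if lost)
where user_id = $1;
-- ...and insert a row into app.dice_bet recording the bet.
```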
If you write this naively, then it can fail in a few different ways:
Imagine if 100 of these queries were run at the same time. Every query would see the same user balance. The user could end up making bets that they couldn't afford if the queries were instead run sequentially, one at a time.
One way to handle this problem is to use Postgres' `SERIALIZABLE` isolation level, which detects conflicts between concurrent transactions and aborts any transaction that would violate serializability, letting you write SQL as if transactions ran sequentially. (Your application must be prepared to retry aborted transactions.)

While this has significant performance implications, one approach is to start with `SERIALIZABLE` transactions while you're getting things working and then transition to better row-locking strategies.
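A sketch of that pattern, again assuming an illustrative `app.balance` table:

```sql
begin isolation level serializable;

-- Read and write as if no other transaction is running concurrently.
select amount from app.balance where user_id = $1;
update app.balance set amount = amount - $2 where user_id = $1;

commit;
-- If Postgres detects a conflict, the transaction fails with SQLSTATE 40001
-- (serialization_failure) and the application should retry it.
```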
Another way to handle this problem is to use row locking (`SELECT ... FOR UPDATE`) to lock the rows you read for the duration of the transaction.
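A minimal sketch of this pattern (table and column names are illustrative):

```sql
begin;

-- Lock the user's balance row; any other transaction that tries to lock
-- it will block until we commit or roll back.
select amount from app.balance where user_id = $1 for update;

-- Now it is safe to write based on the value we just read.
update app.balance set amount = amount - $2 where user_id = $1;

commit;
```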
The goal is to lock every row you are going to read and then update, e.g. the user's balance row (`SELECT ... FOR UPDATE`) and the bankroll row (`SELECT ... FOR UPDATE`).

When using `FOR UPDATE`, you need to consider the order in which you lock the rows to avoid deadlocks.
If some queries lock row A and then row B, but other queries lock row B and then row A, they will "deadlock" because they are waiting for each other to release the lock.
Here are some tips:
Lock less-contended rows (like user balance) before more-contended rows (like bankroll).
Less-contended rows are rows that are accessed by fewer queries, such as those specific to a single user. More-contended rows are accessed by many queries from different users.
Lock rows in the same order for each transaction.
A classic deadlock scenario is when you transfer funds between two users, e.g. `transferFunds(sender, receiver, amount)`, and two users transfer funds to each other at the same time. If both queries lock the sender's balance first, then they will deadlock because they are waiting for each other to release the lock.
Instead, you could write the transaction so that it always locks the user rows in the same order based on user ID:
```ts
async function transferFunds(
  senderId: string,
  receiverId: string,
  amount: number
) {
  // Sort the IDs to maintain a consistent locking order
  const [firstId, secondId] =
    senderId < receiverId ? [senderId, receiverId] : [receiverId, senderId];

  return withPgTransaction(pool, async (pgClient) => {
    // Lock the rows
    await pgClient.query(
      "SELECT balance FROM users WHERE id = $1 FOR UPDATE",
      [firstId]
    );
    await pgClient.query(
      "SELECT balance FROM users WHERE id = $1 FOR UPDATE",
      [secondId]
    );
    // Here, we know that any other transaction that wants to lock these
    // rows will block until our transaction is complete, so we are safe
    // to query the balances.
  });
}
```