Last Week

Last week was the first time Shokken felt like it was brushing up against the real world. Someone reached out because they found the live QR-code demo on the website and wanted to use it for a real event, which forced a late-night hotfix and a whole new level of seriousness about reliability.

That experience made something obvious: I can’t keep treating “the backend” as one big shared sandbox. If I’m iterating quickly on features and infrastructure while real people are trying to use the system, I need guardrails.

Going Public Means the Backend Has to Grow Up

This week’s theme is simple: environments, environments, environments.

Google approved Shokken’s open testing track, which means the beta is now publicly available through the Play Store. The practical difference is huge: testers don’t have to fight through enrollment steps and permissions just to install the app. The less friction there is, the more likely it is that someone who’s curious will actually try it.

But there’s a corresponding responsibility on my side. If the app is easier to find and install, then the system behind it needs to be ready to receive real data and real traffic. Even “beta” is a different category than “alpha builds shared with a small group”.

Up until now, I’ve been running with a single backend that played three roles at once:

  • It was my integration environment (where changes land continuously).
  • It was my staging environment (where I could “test what’s about to ship”).
  • And it was effectively production (because it’s the only backend my testers could use).

That’s an acceptable level of chaos when you’re still experimenting, but it turns into a risk the moment anyone depends on it. If I break something in the backend, I don’t just break my own work—I break the app for everyone.

The Shape I’m Standardizing On: Integration → Staging → Production

The solution I settled on is three persistent backends:

  • Production: serves real users and real data. This is the “don’t touch it casually” environment.
  • Integration: tracks master continuously. This is where I move quickly and validate the latest code against a live backend.
  • Staging: sits between them. It’s promoted from integration, then frozen while it’s tested. Once it’s proven stable, it gets promoted to production.

Conceptually, that sounds straightforward. In practice, I discovered why even big companies sometimes end up without a “proper staging environment”: the database is only one piece of the system.

The moment you have multiple environments, every connected service also needs to be environment-aware.

The “It’s Just Three Backends” Lie

Spinning up parallel Supabase branches is not the hard part. Supabase makes it easy to create branches and mark them as persistent, so they stick around instead of being cleaned up like disposable preview branches.

The difficulty is everything that’s attached to the backend:

Social login has to be configured per environment

Shokken uses Supabase Auth, and for native sign-in that means separate configurations for:

  • Google login on Android
  • Apple login on iOS

Each environment has its own URLs and callbacks. That means each environment needs its own set of keys and its own “this is the URL we’re allowed to redirect back to” configuration.
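As a concrete sketch of what this looks like on the app side (all URLs and client IDs below are made up, not Shokken's real values), the goal is one self-contained configuration record per environment, selected exactly once at build time:

```typescript
type AuthEnv = "integration" | "staging" | "production";

interface AuthConfig {
  supabaseUrl: string;       // each Supabase branch has its own URL
  redirectUrl: string;       // must be allow-listed in the provider console
  googleWebClientId: string; // a separate OAuth client per environment
}

// Every environment gets its own keys and callback URLs; nothing is shared.
const AUTH_CONFIGS: Record<AuthEnv, AuthConfig> = {
  integration: {
    supabaseUrl: "https://integration-ref.supabase.co",
    redirectUrl: "https://integration-ref.supabase.co/auth/v1/callback",
    googleWebClientId: "integration-id.apps.googleusercontent.com",
  },
  staging: {
    supabaseUrl: "https://staging-ref.supabase.co",
    redirectUrl: "https://staging-ref.supabase.co/auth/v1/callback",
    googleWebClientId: "staging-id.apps.googleusercontent.com",
  },
  production: {
    supabaseUrl: "https://production-ref.supabase.co",
    redirectUrl: "https://production-ref.supabase.co/auth/v1/callback",
    googleWebClientId: "production-id.apps.googleusercontent.com",
  },
};

// The build injects exactly one environment, so a running app can never
// mix, say, the production URL with staging OAuth keys.
function authConfig(env: AuthEnv): AuthConfig {
  return AUTH_CONFIGS[env];
}
```

The point of the record shape is that the three values travel together: you can't pick a redirect URL without also picking the matching client ID.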

RevenueCat needs a clean split between sandbox and production

RevenueCat sits in the middle of app-store purchases and my backend entitlements. It also has a very clear distinction between sandbox and production traffic.

With multiple backends, that wiring matters a lot. I need to guarantee:

  • sandbox purchases map to integration/staging
  • production purchases map to production

Otherwise you can land in the nightmare scenario where a real purchase updates a non-production environment, or sandbox traffic accidentally grants real entitlements.
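One way to enforce that mapping is a hard guard at the webhook boundary. RevenueCat events do indicate whether they came from sandbox or production, though the handler below is a simplified sketch, not the real payload shape or Shokken's actual code:

```typescript
type BackendEnv = "integration" | "staging" | "production";
type PurchaseEnv = "SANDBOX" | "PRODUCTION";

// Real purchases may only touch the production backend; sandbox
// purchases may only touch non-production backends.
function isEventAllowed(purchase: PurchaseEnv, backend: BackendEnv): boolean {
  return purchase === "PRODUCTION"
    ? backend === "production"
    : backend !== "production";
}

// In the webhook handler: reject mismatches loudly instead of silently
// granting or updating entitlements in the wrong place.
function handlePurchaseEvent(purchase: PurchaseEnv, backend: BackendEnv): void {
  if (!isEventAllowed(purchase, backend)) {
    throw new Error(`Rejected ${purchase} event on ${backend} backend`);
  }
  // ...proceed to update entitlements...
}
```

Failing loudly matters here: a rejected event shows up in logs and can be investigated, whereas a silently misrouted entitlement might not be noticed until a user complains.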

The website is now part of the backend surface area

The product website isn’t just static marketing pages anymore. The hero section includes a live QR-code demo that talks to the backend so a visitor can feel the “join via QR, see the state update” loop immediately.

Once the backend becomes environment-specific, the website deployment has to become environment-specific too. The demo needs to pull from the right backend, and the “public” site needs to be absolutely sure it’s pointing at the production backend—no mixing and matching.
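A minimal sketch of that wiring, assuming the deploy pipeline passes an environment name into the build (the variable names and URLs are illustrative):

```typescript
// The deploy pipeline, not the code, decides which backend the demo hits.
const SUPABASE_URLS: Record<string, string> = {
  integration: "https://integration-ref.supabase.co",
  staging: "https://staging-ref.supabase.co",
  production: "https://production-ref.supabase.co",
};

function demoBackendUrl(deployEnv: string): string {
  const url = SUPABASE_URLS[deployEnv];
  // Fail the build rather than fall back to a default: a silent
  // fallback is exactly the "mixing and matching" we want to avoid.
  if (!url) throw new Error(`Unknown deploy environment: ${deployEnv}`);
  return url;
}
```

The deliberate choice is the missing default branch: an unrecognized environment name breaks the build instead of quietly pointing the public site at the wrong backend.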

App builds have to line up with backends

If production builds must only ever talk to production, then:

  • each build flavor needs a distinct backend URL + anon key (and any other config)
  • the CI pipeline needs to publish the correct combination without manual “did I remember to change the env var?” steps

This is where “multiple environments” becomes less about spinning up servers and more about system design: making it difficult to accidentally ship a build that points at the wrong backend.
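One cheap piece of that system design is a CI-time sanity check that refuses to publish a build whose flavor and injected backend disagree. This is a hypothetical sketch (the naming convention that the environment appears in the backend hostname is an assumption, not Shokken's real setup):

```typescript
interface BuildConfig {
  flavor: "integration" | "staging" | "production";
  backendUrl: string; // injected per flavor by the pipeline
  anonKey: string;    // likewise injected per flavor
}

// Convention assumed here: each environment's name appears in its
// backend hostname, so a mismatch is mechanically detectable.
function assertFlavorMatchesBackend(cfg: BuildConfig): void {
  if (!cfg.backendUrl.includes(cfg.flavor)) {
    throw new Error(
      `Build flavor "${cfg.flavor}" does not match backend ${cfg.backendUrl}`
    );
  }
}
```

Run as a step before publishing, this turns "did I remember to change the env var?" from a memory exercise into a pipeline failure.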

What does it mean in English?

When you’re building an app, it’s tempting to have one shared backend because it’s simple. One URL, one database, one place where “the truth” lives.

The problem is that building the product and running the product are two different activities:

  • Building requires change, experiments, and mistakes.
  • Running requires stability, predictability, and safety.

Separate environments are how you do both at once.

Integration is where I move fast. Staging is where I prove the system works in a controlled way. Production is where people’s real data lives—and where I don’t get to “just try something” without thinking through consequences.

Nerdy Details

The core rule: promotions, not deployments

The biggest mental shift is that production doesn’t get “deployed to” directly.

The flow I’m building toward is:

  1. Integration updates continuously from master.
  2. When it’s time to cut something stable, integration is promoted to staging.
  3. Staging is tested while it’s frozen.
  4. When it’s trustworthy, staging is promoted to production.

That sounds like bureaucracy, but it’s really just a way to make the system less fragile. Production changes become a deliberate event, not an incidental side effect of “I pushed a commit”.

GitHub environments: the secret-management layer that makes this tolerable

GitHub Actions environments are the glue for this.

At a high level, I want each target to pull the right configuration automatically:

  • Android integration/staging/prod
  • iOS integration/staging/prod
  • web integration/staging/prod
  • backend integration/staging/prod (Supabase URLs, keys, etc.)

With GitHub environments, I can attach secrets and environment variables to the environment itself, then have workflows select the environment they’re deploying to. That removes a huge class of “oops, I renamed the secret and now production is using staging keys” failure modes.
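As a config fragment showing the mechanism (job and secret names are illustrative, not my actual workflows): the job declares which environment it deploys to, and every secret it reads is resolved from that environment's scope automatically.

```yaml
jobs:
  deploy-web:
    runs-on: ubuntu-latest
    environment: staging   # selects the staging-scoped secrets below
    steps:
      - uses: actions/checkout@v4
      - name: Build site
        run: npm ci && npm run build
        env:
          # Same secret names in every environment; the environment
          # declaration above decides which values are injected.
          SUPABASE_URL: ${{ secrets.SUPABASE_URL }}
          SUPABASE_ANON_KEY: ${{ secrets.SUPABASE_ANON_KEY }}
```

Because the secret names are identical across environments, switching a workflow's target is a one-line change, and there's no per-environment name like `STAGING_SUPABASE_URL` to mistype.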

One sharp edge I ran into is that GitHub secrets are effectively write-only: once a secret is stored, you can’t retrieve its value through the normal UI. That’s good for security, but it makes “move everything from repo-level secrets into per-environment secrets” more annoying than it sounds. The migration becomes a careful bookkeeping exercise: make sure you still have the original values somewhere secure, then re-enter them into the environment-specific scopes.

Website edge logic and the trap of parsing URLs

The web demo has a subtle coupling: some edge logic decides which frontend path you’re on by parsing the request URL (I have different subdomains for “join” vs “status display”).

That worked fine when there was only one canonical set of URLs.

As soon as you introduce preview environments and multiple backends, the URLs change. The edge logic needs to account for environment-specific hostnames instead of assuming “production domains only”. This week I registered additional subdomains so each environment has a clean set of URLs, and the next step is updating the edge function logic so it routes correctly across those hosts.

The deeper lesson is: if “which environment is this?” is encoded implicitly in the URL, you need to be extremely deliberate about what URL patterns exist and who can create them. Otherwise, it’s easy to accidentally point the system at the wrong place.
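The shape of the fix can be sketched as routing that extracts both the role and the environment from the hostname, and refuses unknown hosts outright. The hostname patterns below are illustrative stand-ins, not my real domains:

```typescript
type Role = "join" | "status";
type Env = "integration" | "staging" | "production";

interface RouteTarget { env: Env; role: Role; }

// Examples of the assumed pattern:
//   join.example.com            -> production join
//   status.staging.example.com  -> staging status display
function routeForHost(hostname: string): RouteTarget | null {
  const match = hostname.match(
    /^(join|status)(?:\.(integration|staging))?\.example\.com$/
  );
  // Unknown hosts are rejected rather than guessed at: only the URL
  // patterns deliberately registered can route anywhere.
  if (!match) return null;
  const role = match[1] as Role;
  const env = (match[2] as Env | undefined) ?? "production";
  return { env, role };
}
```

The important property is the explicit allow-list: every routable hostname corresponds to a subdomain that was deliberately created, so "which environment is this?" is never inferred from an unexpected URL.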

Why I’m adding friction to my own workflow

I also changed my repo setup to support stricter change control. I moved the project under an organization so I can use approvals and protection rules to prevent an accidental push to the production promotion branch.

I’m still doing trunk-based development—master is the integration branch, not production. But I now treat promotion branches (the ones that trigger production-facing deploys per target) as protected surfaces.

Yes, it’s red tape I created for myself. But it’s the kind that’s designed to prevent the embarrassing internet stories:

  • leaked secrets
  • a production app talking to the wrong backend
  • a database accidentally exposed because a table’s access rules weren’t reviewed

Promotions as an audit checkpoint, not just a version bump

I’m also using the promotion steps as checkpoints for “is this safe?” rather than “did it build?”.

The idea is that each promotion gate can include:

  • build verification for each target
  • checks that the right keys are being used for the right environment
  • security audits and key hygiene checks
  • database review, including row-level security (RLS) enforcement
  • custom “should this table be accessible with an anon key?” audits

That won’t magically solve security forever, but it makes it harder for obvious mistakes to slip into production unnoticed.
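The table audit in particular can be reduced to a simple rule: every table the anon key can read must be explicitly declared public. A sketch of that rule (in practice the "observed" side would come from probing the database or inspecting its policies; here it's passed in so the rule itself is testable, and all names are made up):

```typescript
interface TableAccess {
  table: string;
  anonCanRead: boolean;    // observed: the anon key can SELECT from it
  intendedPublic: boolean; // declared: this table is meant to be public
}

// Any table that is anon-readable without being declared public is a
// finding; the promotion gate fails if this list is non-empty.
function auditAnonAccess(tables: TableAccess[]): string[] {
  return tables
    .filter((t) => t.anonCanRead && !t.intendedPublic)
    .map((t) => `Table "${t.table}" is anon-readable but not declared public`);
}
```

Keeping the "intended public" list as an explicit artifact in the repo means that exposing a new table requires a reviewed change, not just a forgotten RLS policy.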

Next Week

Next week is about finishing the transition from “one backend” to the full three-environment setup—including getting the website deployment cleanly environment-aware.

After that, I’m shifting focus to the pricing and paywall experience. The app already offers generous free functionality (for example, QR self-join), but SMS notifications cost real money to operate. I need the in-app purchase flow to be clear about what you’re buying, and then I want to submit for production access on both the Play Store and the iOS App Store.

While those reviews are in flight, I’ll go back to marketing materials and store listing polish (screenshots, ASO, and the “make this feel trustworthy at a glance” work).