Last Week
Last week I finally coaxed the Supabase cron job into draining the email outbox without manual babysitting. I replayed 500 queued notifications, watched the Kotlin client hand them off to Postgres, and confirmed Resend responds within the same edge invocation. With that loop stabilized the admin dashboard now reflects message success within seconds, and I officially declared the infrastructure feature complete. I did try to finish the Resend webhook, but Supabase keeps rejecting the callback with a 401 even after I disabled JWT enforcement. Rather than sink another sprint into it, I parked the webhook and pointed my attention toward polish: empty-state onboarding, clearer status badges, and guardrails for the first five minutes in the app. Week 39 closed with Shokken capable of running a restaurant waitlist on its own, and a to-do list packed with UX refinement instead of infrastructural firefighting.
Recruiting The Beta Flight
This week looked nothing like a normal sprint. I had out-of-town guests camped in my apartment, so instead of hunkering down in the office I recorded this update in a parking lot while waiting to chauffeur everyone to their next activity. Shipping time was near zero, but it forced me to step back and look at Shokken as a product instead of a codebase. I needed that pause to decide what “feature complete” actually means once real restaurants touch the app.
Between errands I mapped the path from solo dogfooding to a proper closed test. Google now demands fourteen active testers before an app can graduate to production, and they verify that engagement through crash reports, Play-integrated feedback, and usage telemetry. I combed through the policy docs, built a compliance checklist, and rewrote my internal launch notes so every tester invitation includes instructions for installing through the Play Console, submitting feedback, and opting into analytics.
The visiting friends became my first recruits. Before they fly home I’m enrolling them in the test track, pairing each person with a specific restaurant scenario, and scheduling thirty-minute bug hunts that I can observe over screen share. I’m also using them to shake out my own dogfooding regimen: running the waitlist during dinner at home, staging fake rushes, and practicing how quickly I can identify a stuck notification without diving into SQL.
The overall roadmap stays intact. I’ll spend the coming two weeks fixing the issues we discover, keep the test cohort engaged so their Play usage stats stay green, and cut a publishable beta build before I leave the country in late October. Once travel kicks off the emphasis shifts to polish, analytics, and documentation, but this week locked in the human side of the plan—Shokken now has an actual flight crew lined up for the next milestone.
What does it mean in English?
In plain language: Shokken already handles the basics of a restaurant waitlist, and I didn’t add new features this week. Instead I made sure the app is ready for real people to try it. Google Play won’t let me ship a new product unless at least fourteen testers actively use it for a few weeks, so I’m recruiting friends and family to act as my first restaurants. I’m writing checklists for them, recording how they install the beta, and rehearsing the typical dinner-rush scenarios myself. That prep work means the next sprint can focus on fixing the bugs they uncover and smoothing out the onboarding experience instead of scrambling at the last minute. If all goes well, the app will enter closed testing later this month so I can gather feedback while I’m traveling.
Nerdy Details
Google Play Closed Testing Checklist
Google’s latest publishing policy now blocks new waitlist apps like Shokken until at least fourteen real people install the beta, launch it multiple times, and submit Play-integrated feedback. I spent an hour combing through the 2025.08 release notes for the policy center and distilled the gating items into a living checklist. The must-haves: invite-only closed testing, testers added via email that matches a Google account, analytics that report daily active sessions, crash-free user percentages above 80%, and a feedback link wired to the Play Console form. I also confirmed that dogfooding counts toward the engagement metric as long as I distribute through the closed track instead of sideloading builds. That means every personal device and every friend I recruit has to install from the Play Store link so their usage is counted.
Compiling the checklist uncovered a few gaps. I was missing a policy page that explains data collection, so I wrote a terse privacy statement for the beta onboarding email and linked it to a static page hosted on endian.dev. I also added a reminder to collect tester zip codes; Play now flags closed tests where all accounts come from a single metro area. With those details captured I can confidently invite people without worrying about an approval surprise later.
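The gating items above reduce to a small predicate. A minimal sketch of the readiness check I keep alongside the checklist — field names are my own shorthand, not Play Console terminology:

```kotlin
// Hypothetical model of the Play closed-testing gate described above.
data class BetaStatus(
    val activeTesters: Int,          // testers who installed via the Play Store link
    val crashFreeUserPct: Double,    // from Play vitals
    val feedbackLinkWired: Boolean,  // Play Console feedback form reachable in-app
    val privacyPageLive: Boolean,    // data-collection statement published
    val distinctMetros: Int,         // zip-code spread across the cohort
)

// Returns the list of remaining blockers; empty means ready to invite reviewers.
fun readyForClosedTest(s: BetaStatus): List<String> = buildList {
    if (s.activeTesters < 14) add("need ${14 - s.activeTesters} more active testers")
    if (s.crashFreeUserPct <= 80.0) add("crash-free users at ${s.crashFreeUserPct}%, need above 80%")
    if (!s.feedbackLinkWired) add("wire the Play Console feedback link")
    if (!s.privacyPageLive) add("publish the privacy statement")
    if (s.distinctMetros < 2) add("recruit testers outside a single metro area")
}
```

Running it against the current cohort tells me at a glance which invitations or policy pages still block the track.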
Internal Dogfooding Build Variant
Shokken already uses Kotlin Multiplatform, so I maintain a shared commonMain module for business logic and separate Android and desktop front ends. To keep dogfooding honest I created a dedicated Android build type named dogfood that inherits from debug but ships with the same ProGuard rules, feature flags, and signing config as the closed track bundle. The goal is to run the exact code that testers will receive while still exposing a few diagnostic overlays that help me reproduce issues quickly.
```kotlin
android {
    buildTypes {
        getByName("debug") {
            applicationIdSuffix = ".dev"
            resValue("string", "shokken_build_label", "Debug")
        }
        create("dogfood") {
            initWith(getByName("debug"))
            matchingFallbacks += "debug"
            applicationIdSuffix = ".dogfood"
            manifestPlaceholders["shokken.betaMode"] = "true"
            isDebuggable = true
            isMinifyEnabled = false
            resValue("string", "shokken_build_label", "Dogfood")
        }
    }
    flavorDimensions += "dist"
    productFlavors {
        create("closedTrack") {
            dimension = "dist"
            applicationIdSuffix = ".beta"
        }
    }
}
```
That configuration lets me generate two artifacts from the same commit: a dogfooding APK I can sideload during private rehearsals and the Play-distributed bundle that includes Play Integrity metadata. Inside the Compose UI I read shokken_build_label to render a thin banner that says “Dogfood” or “Beta” so screenshots stay traceable. I also gate debug-only tooling behind the shokken.betaMode placeholder so testers never see crash-to-menu buttons or SQL inspectors.
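As a sketch of that gating logic (plain Kotlin rather than the actual Compose code, with hypothetical names), the overlay decision reduces to reading the label/placeholder pair:

```kotlin
// Hypothetical sketch: the real app reads shokken_build_label from resources
// and the shokken.betaMode placeholder from the merged manifest.
data class BuildFlags(val label: String, val betaMode: Boolean)

// Debug-only tooling appears only when the beta flag is set AND the build
// carries an internal label, so Play-track testers never see it.
fun showDiagnostics(flags: BuildFlags): Boolean =
    flags.betaMode && flags.label in setOf("Debug", "Dogfood")
```

The double check is belt-and-suspenders: even if a placeholder leaked into a tester build, the label gate would still hide the SQL inspector.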
Instrumentation And Telemetry
Closed testing only counts if Play sees active sessions, but I want richer application insights than the console provides. I extended my Supabase schema with two tables: tester_sessions for high-level usage stats and session_events for granular actions like “seated party” or “notification sent.” Each row stores the anonymized tester id, build label, and a hash of the restaurant name so I can correlate issues without storing private details. I also added an opt_in_version column so I can prove that every tester accepted the beta privacy policy before their first check-in.
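A minimal sketch of an event row under those constraints — the field names are my shorthand, not the exact schema — with the restaurant name hashed before it ever leaves the device:

```kotlin
import java.security.MessageDigest

// Hypothetical shape of a session_events row as described above.
data class SessionEventRow(
    val testerId: String,       // anonymized tester id, never an email
    val buildLabel: String,     // "Dogfood" or "Beta"
    val restaurantHash: String, // SHA-256 of the normalized restaurant name
    val action: String,         // e.g. "seated party", "notification sent"
    val optInVersion: Int,      // privacy policy version the tester accepted
)

// Hash the restaurant name so issues can be correlated without storing it.
fun restaurantHash(name: String): String =
    MessageDigest.getInstance("SHA-256")
        .digest(name.trim().lowercase().toByteArray())
        .joinToString("") { "%02x".format(it) }
```

Normalizing before hashing means “Cafe A” and “ cafe a ” collapse to the same key, so one restaurant doesn’t fragment into several identities in the dashboard.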
On the client, I wrapped the AppLifecycleTracker that already powers analytics with a new SessionRecorder. It batches events in memory and posts them to Supabase via Ktor when the user leaves the waitlist screen, relying on the same service role the app uses for email dispatch. Here’s the core of that recorder:
```kotlin
class SessionRecorder(
    private val supabaseClient: SupabaseClient,
    private val deviceIdProvider: DeviceIdProvider,
) {
    private val buffer = mutableListOf<SessionEvent>()

    fun record(event: SessionEvent) {
        // Stamp events as they arrive; they are posted later in one batch.
        buffer += event.copy(recordedAt = Clock.System.now())
    }

    suspend fun flush() {
        if (buffer.isEmpty()) return
        supabaseClient.insert("session_events", buffer.map { it.toPayload() })
        buffer.clear()
    }
}
```
Every tester build defaults to flushing in 15-second intervals so the sessions show up quickly inside the metrics dashboard I keep in Retool. That visibility will help me spot engagement drop-offs long before Google does.
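The 15-second cadence plus the on-exit flush boils down to a simple policy. A sketch of that decision, simplified from the real recorder and with hypothetical names:

```kotlin
// Decide whether a buffered batch should be posted. The real recorder
// flushes on a 15-second timer and whenever the waitlist screen is left.
data class FlushPolicy(val intervalMs: Long = 15_000, val maxBuffered: Int = 50)

fun shouldFlush(
    policy: FlushPolicy,
    buffered: Int,        // events currently held in memory
    lastFlushAtMs: Long,  // epoch millis of the previous flush
    nowMs: Long,
    leavingScreen: Boolean,
): Boolean = buffered > 0 &&
    (leavingScreen ||
        buffered >= policy.maxBuffered ||
        nowMs - lastFlushAtMs >= policy.intervalMs)
```

Keeping the policy as a pure function makes it trivial to unit-test the edge cases (empty buffer, screen exit with one event) without spinning up coroutines or a network stack.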
Guided QA Sessions
Because my friends are not restaurant hosts, I have to choreograph the test runs so their feedback is useful. I wrote four guided scripts that simulate the busiest parts of service: a brunch rush with many walk-ins, a dinner with preorder requests, a scenario where the internet drops, and a late-night crowd flipping between party sizes. Each script fits a thirty-minute window and starts with a timer that reminds the tester to submit in-app feedback at the halfway mark—Play requires evidence of interactive use, not just idle installs.
During the sessions I’ll be on a video call with screen sharing turned on. I’m recording the screen on my side with OBS so I can review precise tap sequences later. The dogfood build displays a floating panel that logs the last ten Redux actions and network calls; I capture that stream for the bug report. After each session I’ll transcribe the tester’s verbal notes, link them to the session recording, and file them directly into the defect backlog. The structure keeps the sessions lightweight while guaranteeing they hit all the flows Google reviewers will poke at.
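The floating panel’s “last ten actions” behavior is essentially a bounded deque. A sketch of that idea — not the actual panel code:

```kotlin
// Keep only the most recent `capacity` entries, evicting the oldest first,
// mirroring the dogfood build's last-ten-actions overlay.
class RecentActions(private val capacity: Int = 10) {
    private val deque = ArrayDeque<String>()

    fun log(action: String) {
        if (deque.size == capacity) deque.removeFirst()
        deque.addLast(action)
    }

    fun snapshot(): List<String> = deque.toList()
}
```

Capturing `snapshot()` alongside the OBS recording gives each bug report a precise, ordered trail of what the tester tapped just before the failure.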
Defect Intake And Triage
I migrated my old scratchpad of bugs into a formal triage pipeline. Every issue starts in a Beta Intake database in Notion with required fields for reproduction steps, affected build label, tester id, and whether the problem blocks the “seat party” flow. The form automatically generates a shareable token that I paste into the Play Console response so testers can track progress without seeing the entire backlog.
Once an issue is validated I sync it into Linear where the engineering work happens. The integration tags tickets with closed-track so I can build release notes that list every fix shipped to testers. I also updated the bugreport Supabase function to accept attachments. That allows testers to upload screenshots directly from the app; the function stores them in an S3 bucket and emits a signed URL back to the intake form. By forcing myself to collect structured data now, I cut the time from “reported” to “fixed” dramatically and ensure nothing gets lost when I’m juggling travel.
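The shareable token can be as simple as an opaque random id. A hypothetical sketch of how one might be minted (this is an assumption, not the actual Notion integration):

```kotlin
import java.security.SecureRandom
import java.util.Base64

// Generate an opaque, URL-safe token a tester can use to track one issue
// without gaining visibility into the rest of the backlog.
fun issueToken(bytes: Int = 16): String {
    val buf = ByteArray(bytes).also { SecureRandom().nextBytes(it) }
    return Base64.getUrlEncoder().withoutPadding().encodeToString(buf)
}
```

Sixteen random bytes give 128 bits of entropy, which is unguessable in practice while staying short enough to paste into a Play Console reply.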
Release Automation Before Travel
With a trans-Pacific flight looming I can’t rely on desk time to cut releases. I wrote a scripts/publish_closed_track.sh helper that builds the Android App Bundle, signs it, uploads it to the Play Developer Publishing API, and tags the git commit. The script also runs the Compose UI test suite and the Supabase function integration tests before uploading anything so I can trust the artifact even if I’m rushing.
```shell
#!/usr/bin/env bash
set -euo pipefail

# Run the shared tests, then build the closed-track dogfood bundle.
# (Gradle names bundle tasks <flavor><BuildType>, so closedTrack + dogfood
# yields bundleClosedTrackDogfood.)
./gradlew clean \
  :androidApp:bundleClosedTrackDogfood \
  :common:allTests

# Upload the signed bundle to the closed track with generated release notes.
gcloud beta android-publisher bundles upload \
  --app="$SHOKKEN_APP_ID" \
  --bundle=androidApp/build/outputs/bundle/closedTrackDogfood/app.aab \
  --track=closed \
  --release-notes=artifacts/release_notes.txt
```
The release notes file is generated from the Linear changelog and the Notion intake board, so testers see a crisp summary the moment the build rolls out. Automating the deploy means I can cut a hotfix from a hotel Wi-Fi connection without hunting for documentation or re-running manual steps.
iOS Parity And Shared Code Health
Even though the Google Play requirement drives this sprint, I have to keep the iOS branch healthy so I can mirror the beta quickly. I spent part of a late night rebasing the KMM shared module onto Kotlin 2.0.20, aligning the coroutine and serialization versions between Android and iOS targets. That upgrade reduced my native interop shims by letting me rely on the new Swift-specific FlowPublisher, simplifying the Combine bridge that powers live data on iPad.
On the tooling side I wired the same session recorder into the SwiftUI shell. The multiplatform layer now exposes a BetaDiagnostics protocol that both platforms implement, so testers on TestFlight will emit identical telemetry once I open that track. I also regenerated the Detox end-to-end tests that cover the guest check-in flow; the scripts now run against the shared staging database instead of a mocked backend. Staying ahead of parity keeps the iOS release from becoming a rewrite when I return from travel, and it gives me confidence that dogfooding feedback applies to both platforms.
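The cross-platform seam can be sketched as a plain interface in commonMain. Apart from the BetaDiagnostics name, the members here are my guesses at the shape, not the actual contract:

```kotlin
// Shared contract both the Android and SwiftUI shells implement, so
// telemetry is identical across the Play and TestFlight tracks.
interface BetaDiagnostics {
    val buildLabel: String
    fun recordEvent(name: String, detail: String? = null)
    fun pendingEventCount(): Int
}

// Minimal in-memory implementation, handy for shared-module unit tests.
class InMemoryDiagnostics(override val buildLabel: String) : BetaDiagnostics {
    private val events = mutableListOf<Pair<String, String?>>()
    override fun recordEvent(name: String, detail: String?) { events += name to detail }
    override fun pendingEventCount(): Int = events.size
}
```

Because the interface lives in shared code, each platform only supplies the transport; the event vocabulary and batching semantics stay identical everywhere.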
Next Week
Next week is about execution rather than planning. I’ll run the guided QA sessions with my visiting friends, collect their bug reports, and knock out any defects that block the core “add guest → notify → seat” loop. I also want to finish the onboarding empty state and copy updates that I scoped in Week 39 so the first-run experience feels welcoming to the testers. If schedule allows I’ll pilot the nightly Resend reconciliation job behind a feature flag, but the primary goal is simple: keep the cohort engaged so Google sees active usage, and ship a closed-track build that I can trust while I’m on the road.