Last Week

Last week I recorded the update from a parking lot while the houseguests I was hosting rotated through the city. I used the downtime to map Google Play’s closed testing requirements, built a checklist for the fourteen testers I need to keep the track alive, and rewrote my dogfooding playbook so the first bug hunts have scripts instead of improvisation. Most of my energy went into people logistics: scheduling those friends and family members for thirty-minute screen-sharing sessions, documenting how to install from the Play Console instead of sideloading, and figuring out how to capture their feedback without asking them to learn SQL. The Supabase notification loop and cron drain proved stable, which meant the backlog that remained was full of visual papercuts and interaction snags. I wrapped week 40 knowing the infrastructure could ship builds every day, but the interface still needed the kind of polish that only comes from sitting back down at the keyboard uninterrupted.

Polishing Bottom Sheets, Shipping Without Touching A Button

With the apartment quiet again I opened the backlog triage doc and spent the week chain-fixing the UI bugs that made dogfooding feel brittle. The biggest offender was the maze of bottom sheets—host actions, party details, seatings, and SMS log viewers all stack in Shokken—and I kept finding cases where one sheet would appear half-expanded or would refuse to dismiss when another appeared. I rewired those flows so every sheet drives from a single state machine, standardized the drag handles, and cleaned up the typography so the focus stays on the parties in the queue. I also tuned empty states, restored sticky headers that were collapsing on scroll, and closed the loophole that let the wait estimate picker show stale values after a cancelled hold. By midweek the app felt like it respected the host instead of second-guessing them.

That polish sprint paired nicely with the automation dividends I invested in months ago. I hardened the GitHub Actions workflow so instrumented tests run before every upload, secrets rotate out of the cache between jobs, and release notes get attached automatically. Nightly at 16:00 UTC a fresh dogfood build now pushes to the internal track without me touching a laptop, and every Tuesday the workflow tags an alpha candidate, opens a tracking branch, and posts the QA checklist straight into the repo. I ran three cycles this week to prove the loop: the tests stay green, Google Play ingests the bundle without manual approval, and I can keep building features while the pipeline handles delivery.

The cherry on top was hearing from people actually watching these updates. Nicholas Price suggested I start each video with a quick description of Shokken so new viewers aren’t lost, and that feedback nudged me to script a channel trailer that explains the product, why I’m obsessed with waitlists, and how the beta is going to roll out. I drafted the outline while GitHub Actions churned and slotted time after the test build lands to film it. With a long trip looming I’m grateful to have the app caught up to its ambition: the UI is no longer a guilt-tripping TODO, the build system will keep shipping while I’m away, and I can focus on onboarding testers instead of wrestling bottom sheets.

What does it mean in English?

Shokken is finally ready for real testers because I just spent the week sanding down the parts that used to confuse people. The app lets a host manage a restaurant waitlist, and all of the actions open bottom sheets that were fighting each other. I fixed the layouts so they open and close cleanly, show the right information, and make it obvious what to do next. At the same time I confirmed that my automated build pipeline works without me: every day a new version uploads to Google Play, and every Tuesday a special build is tagged for closer review. That gives me confidence to travel without pausing progress. I’m also responding to feedback by planning a quick introduction in each video and a channel trailer so newcomers understand what Shokken is before I dive into the weeds.

Nerdy Details

Cataloging the bottom sheet failures

I started Monday by replaying every bug the friends I recruited flagged during our dry runs. Most of them had the same root cause: multiple bottom sheets sharing the same ModalBottomSheetState with different lifecycles. When a host opened the party details, the seating actions sheet stayed half visible and the swipe gesture would toggle between them. I screenshotted each failure, noted the reproduction steps, and turned the 36-line QA checklist from last week into a state transition matrix. The matrix listed every sheet, the event that should summon it, and the expected destination when an action completes. That exercise exposed redundant sheets (party details vs. edit party) and missing dismiss events (especially when the SMS log sent a reminder). By noon I had a kanban column titled “sheet debt” with fourteen cards, which became the focus for the rest of the week.
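To give a feel for the matrix, here is a minimal Kotlin sketch of the idea. The sheet and event names are illustrative stand-ins, not Shokken's real identifiers; the point is that every (current sheet, event) pair maps to an expected destination, and any combination the matrix doesn't know about simply keeps the current sheet, which is exactly the class of bug the exercise surfaces:

```kotlin
// Illustrative subset of sheets and events; not Shokken's actual names.
enum class Sheet { PartyDetails, AssignTable, SmsLog, WaitEstimate }
enum class Event { OpenParty, AssignSeat, ViewSms, ReminderSent, Dismiss }

// Each entry: (currently visible sheet, event) -> sheet that should be
// visible afterwards, or null when everything should be hidden.
val transitions: Map<Pair<Sheet?, Event>, Sheet?> = mapOf(
    (null to Event.OpenParty) to Sheet.PartyDetails,
    (Sheet.PartyDetails to Event.AssignSeat) to Sheet.AssignTable,
    (Sheet.PartyDetails to Event.ViewSms) to Sheet.SmsLog,
    (Sheet.SmsLog to Event.ReminderSent) to null, // the missing dismiss event
    (Sheet.AssignTable to Event.Dismiss) to null,
)

// Resolve the next sheet; an unlisted combination leaves the current sheet
// in place, flagging a gap in the matrix.
fun next(current: Sheet?, event: Event): Sheet? =
    if ((current to event) in transitions) transitions[current to event] else current
```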

Refactoring the sheet controller

The fix required pulling the sheet orchestration out of individual composables. I created a dedicated controller that owns a single bottom sheet state, exposes a strongly typed HostSheet type, and handles sequencing so only one sheet can be visible at a time. The key was to suspend until a sheet fully hides before switching content; otherwise Compose would try to animate both instances at once. The new controller looks like this:

import androidx.compose.material3.ExperimentalMaterial3Api
import androidx.compose.material3.SheetState
import androidx.compose.material3.SheetValue
import androidx.compose.material3.rememberModalBottomSheetState
import androidx.compose.runtime.Composable
import androidx.compose.runtime.Stable
import androidx.compose.runtime.remember
import androidx.compose.runtime.rememberCoroutineScope
import kotlinx.coroutines.CoroutineScope
import kotlinx.coroutines.flow.MutableStateFlow
import kotlinx.coroutines.flow.StateFlow
import kotlinx.coroutines.flow.asStateFlow
import kotlinx.coroutines.launch

@OptIn(ExperimentalMaterial3Api::class)
@Stable
class SheetController(
    private val scope: CoroutineScope,
    private val state: SheetState
) {
    private val _active = MutableStateFlow<HostSheet?>(null)
    val active: StateFlow<HostSheet?> = _active.asStateFlow()

    fun present(sheet: HostSheet) {
        scope.launch {
            // Suspend until any visible sheet fully hides before swapping
            // content, so Compose never animates two sheet instances at once.
            if (state.currentValue != SheetValue.Hidden) {
                state.hide()
            }
            _active.value = sheet
            state.show()
        }
    }

    fun dismiss() {
        scope.launch {
            if (state.currentValue != SheetValue.Hidden) {
                state.hide()
            }
            _active.value = null
        }
    }
}

@OptIn(ExperimentalMaterial3Api::class)
@Composable
fun rememberSheetController(
    state: SheetState = rememberModalBottomSheetState(skipPartiallyExpanded = true)
): SheetController {
    val scope = rememberCoroutineScope()
    return remember(scope, state) { SheetController(scope, state) }
}

Existing screens now request the controller via dependency injection instead of instantiating their own sheet state, so declarative updates stay predictable. Because everything flows through a single StateFlow, I can drive analytics events and test assertions from one place.
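For context, a rough sketch of how a screen might consume the controller. The screen name, button, and sheet contents are illustrative stand-ins, and I'm assuming the Material 3 ModalBottomSheet is handed the same SheetState the controller owns:

```kotlin
@OptIn(ExperimentalMaterial3Api::class)
@Composable
fun HostScreen(controller: SheetController, sheetState: SheetState) {
    // Single source of truth: whichever sheet the controller says is active.
    val active by controller.active.collectAsState()

    Button(onClick = { controller.present(HostSheet.PartyDetails) }) {
        Text("Party details")
    }

    active?.let { sheet ->
        ModalBottomSheet(
            sheetState = sheetState,
            onDismissRequest = { controller.dismiss() }
        ) {
            when (sheet) {
                HostSheet.PartyDetails -> Text("Party details content")
                else -> Text("Other sheet content")
            }
        }
    }
}
```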

Coordinating modal hierarchies

Once the controller existed I rewired each screen to describe its sheet payload as a sealed hierarchy. HostSheet.PartyDetails carries the active party id, HostSheet.AssignTable includes the selected floor section, and so on. That allowed me to centralize focus management: when a sheet opens I pre-load the view model, set the BackHandler priority, and register the appropriate semantic test tags. I also applied Modifier.zIndex to guarantee the scrim stack order stays consistent, and made sure the seat assignment keyboard uses WindowInsets.ime so the sheet doesn’t jump when the on-screen keyboard appears. The cleanup eliminated the flicker that QA reported and made the transitions smooth even on my aging Pixel 5 test device. Most importantly, it removed the class of bugs where two sheets fought for the same pointer input channel.
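The payload hierarchy is roughly the following shape — field names and test tags here are my own illustrations, not the real ones. The nice property is that the exhaustive when means adding a new sheet without wiring up its tag (or analytics name) fails to compile:

```kotlin
// Hypothetical shape of the sealed payload hierarchy; names are illustrative.
sealed class HostSheet {
    data class PartyDetails(val partyId: String) : HostSheet()
    data class AssignTable(val partyId: String, val floorSection: String) : HostSheet()
    data class SmsLog(val partyId: String) : HostSheet()
    object WaitEstimate : HostSheet()

    // One place to derive semantic test tags (and analytics names); the
    // exhaustive when forces every new sheet to register itself.
    val testTag: String
        get() = when (this) {
            is PartyDetails -> "sheet_party_details"
            is AssignTable -> "sheet_assign_table"
            is SmsLog -> "sheet_sms_log"
            WaitEstimate -> "sheet_wait_estimate"
        }
}
```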

Regression protection with snapshot tests

I do not want to re-run this bug bash every time I adjust typography, so I wired new UI tests around the controller. The nightly workflow now runs a Compose instrumentation suite that exercises every sheet combination the matrix captured. For visual confidence I rely on Shot to capture golden screenshots with the sheets at their fully expanded, half expanded, and hidden states. I added a helper that sets deterministic content heights so the baselines actually line up across CI and local runs. The tests already paid off: when I tweaked spacing on the wait estimate picker, the snapshot diff showed the drag handle offset had regressed by 8 dp, which would have reintroduced the animation jitter. Catching that in CI saved me another evening of manual retesting.
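The deterministic-height helper is roughly this shape — the name and default value are mine, not the real implementation, but pinning the content to a required height is what keeps the golden screenshots stable across devices:

```kotlin
import androidx.compose.foundation.layout.requiredHeight
import androidx.compose.ui.Modifier
import androidx.compose.ui.unit.Dp
import androidx.compose.ui.unit.dp

// Hypothetical helper: pin sheet content to a fixed height so Shot's
// baselines line up between CI and local runs regardless of device metrics.
fun Modifier.deterministicSheetHeight(height: Dp = 480.dp): Modifier =
    requiredHeight(height)
```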

GitHub Actions cadence

The automation side got just as much attention. The nightly job now runs on a cron scheduled for 0 16 * * *, which lines up with 9 a.m. in my time zone. It fans out across an Android build matrix so I compile both the dogfood and alpha variants, run unit tests, execute the Compose instrumentation suite on Firebase Test Lab, and only after all of that succeeds do I assemble the bundle. To keep long-lived credentials out of the repo, I rotate the Play Store service account key weekly and exchange a GitHub OIDC token for short-lived Google Cloud credentials at runtime instead of committing the JSON key. The workflow uploads the dogfood build to the internal track, posts the version code to Slack, and attaches the markdown changelog I generate from conventional commits. The whole process takes fourteen minutes, which is fast enough that I can kick off a run before lunch and see it finish before my coffee cools.

Guardrails for Tuesday alpha

Tuesdays now have a special lane. The cron kicks off the same pipeline but with an extra job that tags the repository with alpha-${date}, opens a branch named qa/${date}, and creates a GitHub issue that links all of the validation tasks. That branch is protected so I can only merge changes that relate to test fixes; anything else has to wait for the next cycle. I also attach the latest telemetry snapshot from Supabase so testers can confirm analytics still register their sessions. After the alpha build hits Google Play I run a short manual script: install from the Play Console link, verify sign-in, create two parties, seat one, cancel the other, and confirm Resend sends the SMS. Documenting that loop means anyone helping while I travel can step in without guessing.

Preparing for human testers

Because the beta cohort is almost ready, I drafted the scripts they will use during their first sessions. Each tester gets a scenario card that lists a fake restaurant name, inventory limits, and the kinds of parties they should simulate. The app now seeds those demo parties so they can jump straight into the queue without hand-typing guests. I also embedded a support link inside the profile screen that opens a pre-filled email with device info and the last ten log lines, which will make debugging feedback much faster. Finally, I added analytics events for every sheet presentation so I can watch for churn. If I see people backing out of the seating sheet repeatedly, I will know to revisit the design before the next release.
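The pre-filled email body boils down to something like this sketch — the function and field names are mine, not Shokken's actual code, but it shows the "device info plus last ten log lines" idea:

```kotlin
// Illustrative sketch: build the support email body from device info and an
// in-memory log buffer, keeping only the last ten lines as described.
fun supportEmailBody(deviceInfo: String, logLines: List<String>): String {
    val recentLogs = logLines.takeLast(10)
    return buildString {
        appendLine("Device: $deviceInfo")
        appendLine()
        appendLine("Recent logs:")
        recentLogs.forEach { appendLine(it) }
    }
}
```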

Content pipeline and messaging

The feedback from Nicholas Price arrived at the perfect time. I scripted a two-minute cold open that explains why Shokken exists, the pain points it fixes for hosts, and what makes the waitlist flow different. I also added a reusable snippet to my teleprompter notes so each weekly video starts with a single-sentence refresher before diving into the deep cuts. While I still record everything in one take, having that script will make the upcoming channel trailer a lot cleaner. I scheduled it for the week the test build goes live so I can point new viewers toward the app and the signup form. Seeing the subscriber count climb to twenty-six lit a fire under me to keep the narrative coherent.

Travel mode contingencies

I leave the country soon, so I spent Friday making sure progress will not stall. I installed Android Studio on my travel laptop, synchronized dotfiles, and confirmed the project still builds on a smaller screen. I wrote a “flight checklist” in Notion that covers everything I need to do before boarding: kick off the nightly build, check Play Console vitals, and clear the GitHub inbox. I also set up a shortcut that forwards build failures to my email in case I’m away from Slack. The plan is to focus on lighter tasks—marketing copy, support docs, the channel trailer—while still recording weekly updates from whatever city I’m in. The automation work means I can do that without feeling like I’m abandoning the product.

Next Week

Next week is about proving the polish sticks. I’ll keep running dogfood sessions each morning to look for any regressions in the bottom sheet controller, finish the scripted walkthrough that will onboard the fourteen testers, and merge the copy edits so the in-app support link points to fresh documentation. I also want the channel trailer recorded before I get on a plane, which means carving out a block to rehearse the intro Nicholas suggested and stitching in footage of the host workflow. If everything stays green, the Tuesday alpha build should be the one I share with the first external testers.