Last Week
Last week was the week I stopped pretending private polish was enough. A context-free tester put Shokken through its paces, confirmed that the app was mechanically stable, and exposed a different class of problem: several workflows still asked too much of a first-time user. That feedback pushed me toward the obvious next move, which was to stop circling production and actually begin the store-submission process.
I followed through on part of that immediately. The Google Play production-access application is now submitted, which means the Android side is officially in the queue. The iOS side is still a little behind because Apple asks for more of the story around the app: cleaner copy, more screenshots, and generally more evidence that the product is ready to be presented clearly. Alongside that store work, I made a change to my development loop that should matter far beyond this week: I finally gave my terminal workflow a way to drive the Android emulator directly for runtime smoke tests.
Closing the Last Manual Gap
This week’s real theme was not store paperwork. It was removing one more awkward manual handoff from the development loop.
Until now, the workflow had a sharp boundary in it. I could design a change, implement it, run tests, and review the code. But once a feature crossed from “code that compiles” into “behavior that must be exercised on a screen,” I still had to step in manually. That is not unusual for mobile work. Runtime behavior is where many of the annoying problems live: navigation mistakes, bad state transitions, layouts that technically render but behave strangely, and crashes that only show up once a real screen flow is exercised.
So I finally wired in a bridge between the terminal workflow and the Android emulator.
The practical effect is straightforward. The workflow can now interact with the emulator through the normal Android tooling stack, capture screenshots of the current screen, interpret what is visible, and then continue the flow by issuing taps, swipes, and other device actions. It is not a live video feed and it is not some magical “understand the whole app instantly” system. It is a screenshot-by-screenshot loop. But that is enough to close an important gap.
Before this, the final step after implementation was often “now I need to go poke at the app myself.” Sometimes that was fine. Sometimes it was exactly what I should do anyway, especially for more subjective product questions. But for straightforward smoke tests, it was repetitive overhead. If I already know the flow I want checked, there is no reason that verification has to begin and end with my own thumbs.
That matters because speed in development is not only about typing code faster. It is also about reducing context switches. Every time I have to stop the coding loop, move into manual runtime verification, then come back and explain what I found, I pay a tax. The new setup does not remove that tax entirely, but it reduces it enough to be useful. I can treat runtime smoke testing as part of the same implementation pass instead of as a separate ceremony.
What does it mean in English?
This week I made it possible for my development setup to test basic app flows on an Android emulator without me manually tapping through every screen first.
That does not mean mobile development suddenly became self-driving. It just means the boring part got smaller.
Previously, after making a change, I still had to open the emulator and run through obvious checks myself: does this screen open, does that button work, does this flow crash, does the state update the way I expect? Now the workflow can do a first pass on those checks by looking at one screenshot at a time and interacting with the emulator through standard device commands.
For something as UI-heavy as Shokken, that is valuable. The app lives or dies on screen behavior, not just on whether the Kotlin code compiled cleanly. If I can catch obvious runtime issues earlier and more consistently, I get to spend more of my attention on product decisions instead of repetitive verification.
Nerdy Details
Why this gap mattered so much
The uncomfortable truth about mobile development is that “tests pass” and “the feature works” are only loosely correlated.
Unit tests can validate logic. Integration tests can validate data movement. Static analysis can catch a lot of mistakes before runtime. But the app still has to survive actual use inside a real operating environment, with screen transitions, state restoration, asynchronous updates, and all the other details that do not fully reveal themselves in compile-time checks.
That last layer has traditionally forced a human handoff in my workflow. After code review, I still needed to launch the app, navigate to the changed surface, and confirm that nothing embarrassingly obvious was broken. That step is necessary, but it is also exactly the sort of structured, repetitive work that benefits from tooling support.
What I added this week is useful because it moves runtime verification closer to the point of change. The workflow no longer has to stop at “the code looks right.” It can continue through “the app at least survives a basic pass through the feature.”
The emulator bridge is simple, and that is fine
The important detail here is that the setup is not sophisticated in the sci-fi sense. It does not watch a live 60 FPS stream. It does not continuously ingest the full visual state of the device. It works in a slower, more deliberate loop:
- capture the current emulator screen
- interpret the screenshot
- decide the next action
- send a device command such as tap or swipe
- repeat until the smoke test is done
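On the Android side, that loop maps onto plain adb commands. Here is a minimal sketch in Python of the capture-and-act plumbing, assuming `adb` is on the PATH and the emulator is the only connected device; the "interpret the screenshot" and "decide the next action" steps are deliberately left out, since those are where the actual workflow intelligence lives:

```python
import subprocess

def input_cmd(action: str, *args: int) -> list[str]:
    """Build the adb `input` command line for a tap or swipe."""
    return ["adb", "shell", "input", action, *[str(a) for a in args]]

def capture_screen(path: str = "screen.png") -> str:
    """Grab the current emulator screen as a PNG via screencap."""
    png = subprocess.run(["adb", "exec-out", "screencap", "-p"],
                         check=True, capture_output=True).stdout
    with open(path, "wb") as f:
        f.write(png)
    return path

def tap(x: int, y: int) -> None:
    """Send a tap at the given screen coordinates."""
    subprocess.run(input_cmd("tap", x, y), check=True)

def swipe(x1: int, y1: int, x2: int, y2: int, duration_ms: int = 300) -> None:
    """Send a swipe gesture lasting `duration_ms` milliseconds."""
    subprocess.run(input_cmd("swipe", x1, y1, x2, y2, duration_ms), check=True)
```

Everything here is stock Android tooling: `screencap` and `input` ship with the platform, so the bridge needs no agent installed inside the app under test.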
That sounds primitive because it is primitive. But primitive is not the same thing as useless.
For smoke tests, the job is not to model every nuance of a human operator. The job is to answer a smaller set of questions:
- Does the screen I expect actually appear?
- Can the next action be found and executed?
- Does the flow continue without crashing?
- Does the resulting state look broadly correct?
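The first two questions can often be answered without any image interpretation at all, because Android can dump the current UI hierarchy as XML (`adb shell uiautomator dump`, then pull `/sdcard/window_dump.xml`). A hypothetical helper along those lines, assuming the standard dump format; the "Join queue" label in the usage example is made up for illustration:

```python
import re
import xml.etree.ElementTree as ET

def find_tap_target(screen_xml: str, label: str) -> "tuple[int, int] | None":
    """Find a node whose text matches `label` in a uiautomator dump and
    return the center of its bounds, i.e. where a tap should land."""
    root = ET.fromstring(screen_xml)
    for node in root.iter("node"):
        if node.get("text") == label:
            # bounds attributes look like "[left,top][right,bottom]"
            l, t, r, b = map(int, re.findall(r"\d+", node.get("bounds", "")))
            return ((l + r) // 2, (t + b) // 2)
    return None
```

A `None` result is itself a smoke-test verdict: the screen I expected either did not appear or does not contain the action I planned to exercise next.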
That is enough to catch a surprising number of bad outcomes. The biggest wins are the obvious ones: broken navigation, missing elements, failed actions, and runtime issues that were never visible from code review alone.
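The crash half of that list is also checkable without a screenshot. Android's default crash handler logs uncaught exceptions under the `AndroidRuntime` tag with a `FATAL EXCEPTION` header line, so a smoke test can fail fast by scanning a log dump (for example, the output of `adb logcat -d`). A small sketch:

```python
def find_fatal_exception(logcat_text: str) -> "str | None":
    """Return the first FATAL EXCEPTION line from a logcat dump, or None.

    Android logs uncaught exceptions under the AndroidRuntime tag with a
    'FATAL EXCEPTION' header, so one line scan is enough to flag a crash.
    """
    for line in logcat_text.splitlines():
        if "FATAL EXCEPTION" in line:
            return line.strip()
    return None
```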
Why this is especially useful for Shokken
Shokken is not a backend-heavy admin console where most of the product value is hidden behind APIs. It is a mobile tool for hosts and operators, which means interaction design is the product. If a queue action is too obscure, if a button label causes hesitation, or if a screen transition fails, that is not a minor cosmetic issue. That is the work itself going wrong.
The more of the product logic that sits on the surface, the more important it becomes to test the surface as part of the implementation loop.
That does not mean I can outsource all judgment to tooling. A smoke test can tell me whether the path functions. It cannot fully answer whether the path is humane, legible, or well designed. Those are different questions. But there is still real leverage in being able to say: before I look at the finer UX questions, let me first confirm the app can walk through the basics without falling over.
Planning discipline matters even more now
One subtle point from this week is that better runtime tooling does not reduce the need for clear planning. It raises it.
If I want the workflow to carry a task through implementation and then into an emulator pass, the task itself needs to be well formed. The expected behavior has to be specific. The target flow has to be clear. The issue description has to be good enough that “test the feature” means something concrete rather than vague hope.
That is why I do not see this as replacing review or thoughtful issue writing. The sequence still matters:
- define the change well
- review the approach
- implement it on an isolated branch
- run code review and normal checks
- exercise the feature inside the app
The improvement is that the last step is now easier to include systematically instead of only when I have the patience to do it manually right away.
Store submission is still moving, just more slowly on iOS
The other thread this week was the production rollout itself.
Android is now in Google’s hands for the first production-access pass. I am not assuming approval on the first attempt, because stores are stores and first submissions tend to reveal something you forgot to say, forgot to show, or forgot to classify correctly. But the important part is that the process has started.
iOS is next. Apple wants more metadata density than Google does, and that means I need to finish the supporting material: copy, screenshots, and the rest of the presentation layer that makes the app legible in the store before anyone even installs it.
That work may look less technical from the outside, but it is still product work. A confusing listing filters out potential testers before the app gets a chance to prove itself. If I am serious about launching, I have to treat store presentation as part of the system rather than as decorative paperwork.
Next Week
Next week is about finishing the iOS production application and staying responsive to whatever comes back from Google on the Android side.
If there is room beyond that, I want to keep tightening the public-facing material around the app: stronger store copy, better screenshots, and better website visuals. The backend is ready, the beta builds are live, and the runtime loop is stronger than it was a week ago. At this point, the work is less about invention and more about reducing the remaining friction between “ready enough” and actually public.