Last Week

The landing page has been beautified. However, I spent the majority of my time exploring a powerful AI assistant to rapidly increase my throughput.

Tools used:

What does that mean in plain English?

I asked an AI assistant to write the app code for me instead of writing it myself.

Nerdy Details

Asking an AI to make an app is not as straightforward as I thought, but it is also immensely powerful. A few things to consider:

  • If you already have a code base, the AI will need to read a good chunk of your existing code to understand the context before it can provide meaningful responses.

  • However, the more existing code the AI reads, the more of its context window is consumed. This in turn increases the processing time for a response, in addition to costing a lot of money.

  • If you plan to run a local LLM to help with coding, you may run out of VRAM before you realize it. The current state-of-the-art open-source LLM for coding is the QwQ 32B model, which takes up around 30GB of VRAM before adding any context. No single consumer-grade GPU on the market can handle this model with a moderate context window; you'd have to go enterprise-grade.

  • If you plan to use a cloud-based LLM to help with coding, keep in mind that you get charged for tokens in and out, in addition to context-window caching. So the larger the context window, the dearer the bill, holding everything else constant.

  • Chat-based LLMs are not agents. In other words, they can generate code for you, but you still have to integrate it into your project yourself.

  • Code generated by LLMs will, in all likelihood, contain errors. You need to know what to look for in order to fix them; the AI will not flag them for you.

  • LLMs cannot query the latest documentation when generating code. Their knowledge of any particular technology, framework, or convention is frozen in time, a snapshot of the dataset they were trained on. Claude 3.7 Sonnet's training data runs through late 2024; anything newer, it simply isn't aware of. It is your responsibility to find the up-to-date documentation and fix any implementation errors the LLM may have produced.

  • Prompt engineering is of paramount importance to the quality of the code an LLM ends up generating. Below is an example of the prompt I used to generate the code I needed.

      Make an app for [something]. Make it work on iOS, Android, and desktop. The app is called [app name]. Use "project.id" as the project ID.
    
      [describe in painful detail how this app should work]
    
      App requirements:
      The app should support real-time updates.
      The app should dynamically adjust to different screen sizes including phone screen, tablet screen, and computer screen.
      The app should support dark mode.
      The app should provide input validation.
      All email should be sent via Supabase's Edge Function.
      Make the app visually appealing and adopt Kotlin Multiplatform best practices. Use Material3 components and a color scheme that is visually appealing and relevant to the [your chosen] industry.
      Provide informational offline support for the app.
    
      Use the following technologies:
      Kotlin Multiplatform
      Compose Multiplatform
      Compose Material3 UI Library
      Voyager Navigation Library
      Supabase Authentication Service (Email OTP and deep link method)
      Supabase Postgres Database Service
      MVVI design pattern
    
      For the Supabase integration, please provide a general implementation guide.
      Create mockup preview screenshots of each screen.
    
      Ask me any clarifying questions you may have before proceeding. Do not assume any technological or architectural decisions if you need them but are not provided above.
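
To make the VRAM point above concrete, here is a back-of-the-envelope sketch. The function name and the per-token KV-cache constant are my own illustrative assumptions, not measured figures; real memory use depends on the quantization format, the model's layer count, and attention configuration.

```kotlin
// Rough VRAM estimate for running an LLM locally.
// Assumption: memory is dominated by the weights, plus a KV cache
// that grows linearly with the number of context tokens.
fun estimateVramGb(
    paramsBillions: Double,
    bytesPerParam: Double,   // 2.0 for FP16; roughly 0.5-0.6 for 4-bit quantization
    contextTokens: Int,
    kvBytesPerToken: Double  // model-dependent; treated as a given here
): Double {
    val weightsGb = paramsBillions * bytesPerParam     // 1e9 params * bytes, expressed in GB
    val kvCacheGb = contextTokens * kvBytesPerToken / 1e9
    return weightsGb + kvCacheGb
}

fun main() {
    // A 32B-parameter model at FP16: the weights alone are ~64GB,
    // which is why quantized builds still land near the 30GB mark.
    println(estimateVramGb(32.0, 2.0, 8192, 250_000.0))
}
```

Even before the context window enters the picture, the weights term alone explains why a single consumer GPU falls short.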
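
The cloud-billing point can be sketched the same way. The prices below are placeholders, not any provider's actual rates, and real bills also include cache reads/writes; the point is just that the input term dominates as the context grows.

```kotlin
// Back-of-the-envelope cloud LLM bill: input and output tokens,
// each priced per million tokens. Placeholder prices, not real rates.
fun estimateCostUsd(
    inputTokens: Long,
    outputTokens: Long,
    inputPricePerMTok: Double,   // e.g. 3.0 means $3 per 1M input tokens
    outputPricePerMTok: Double
): Double =
    inputTokens / 1e6 * inputPricePerMTok +
        outputTokens / 1e6 * outputPricePerMTok

fun main() {
    // Each turn resends the conversation so far, so input tokens grow
    // much faster than output tokens over a long coding session.
    println(estimateCostUsd(200_000, 8_000, 3.0, 15.0))
}
```

Holding everything else constant, doubling the context roughly doubles the input term, which is exactly the "dearer the bill" effect described above.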
    

Next Week

Actually implement the code Claude generated for the app I want to build.