Last Week
Last week I completed the RevenueCat integration that had been challenging me for the past three weeks. Now when users tap the subscription tile in the navigation panel, the paywall displays correctly - a major milestone for the app’s monetization strategy.
The bulk of my time went to significant tooling improvements around AI coding agents and development environments. I set up Model Context Protocol (MCP) with a local vector database to feed my AI coding assistant up-to-date documentation, cutting the share of AI-generated code that failed to compile from nearly 100% to roughly 20-30%.
What does it mean in English?
Think of an AI coding assistant as a very smart programming partner who knows a lot about coding but is working from outdated information. Imagine if your coding buddy went to school years ago and never learned any new programming techniques or updates since then - they’d still be helpful, but would often give you advice that no longer works.
That’s exactly what was happening with my AI coding tools. They were trained on old information and kept suggesting code that wouldn’t work with modern versions of the technologies I’m using. It was like asking someone from 2020 how to use a 2025 smartphone app - they’d have good general ideas but miss all the important recent changes.
To fix this, I set up a system that automatically feeds the AI the latest documentation and best practices. Now when I ask it to write code, it first checks the most current information before making suggestions. It’s like giving my coding buddy a direct line to the latest textbooks and manuals.
The RevenueCat integration I completed is the payment system for my app - it’s what handles subscriptions when users want to pay for premium features.
Nerdy Details
The core issue with AI coding assistants is their knowledge cutoff dates. Most models like Claude, GPT-4, or Gemini have training data frozen at specific points in time, making them unreliable for rapidly evolving frameworks like Kotlin Multiplatform or RevenueCat.
Here’s how I solved this using Model Context Protocol (MCP):
Step 1: Documentation Embedding
```bash
# Clone the latest documentation repository
git clone https://github.com/google/kotlin-multiplatform-docs.git

# Install required dependencies
pip install openai sentence-transformers chromadb
```
Step 2: Create Vector Database
```python
import os

import chromadb
from sentence_transformers import SentenceTransformer

# Use a persistent client so the MCP server (Steps 3-4) can read the same database
client = chromadb.PersistentClient(path="./kotlin_docs_db")
collection = client.get_or_create_collection("kotlin_docs")

# Load the embedding model
model = SentenceTransformer('all-MiniLM-L6-v2')

def embed_documentation(docs_path):
    for root, dirs, files in os.walk(docs_path):
        for file in files:
            if file.endswith('.md'):
                path = os.path.join(root, file)
                with open(path, 'r', encoding='utf-8') as f:
                    content = f.read()
                embedding = model.encode(content)
                collection.add(
                    documents=[content],
                    embeddings=[embedding.tolist()],
                    # Use the relative path as the ID so same-named files
                    # in different directories don't collide
                    ids=[os.path.relpath(path, docs_path)]
                )
```
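Before wiring this into anything else, it’s worth a quick sanity check that the database actually returns relevant chunks. A minimal sketch continuing from the snippet above, assuming the repository cloned in Step 1 sits in the working directory (the query string is just an example):

```python
# Build the database from the cloned docs, then run a test query
embed_documentation("kotlin-multiplatform-docs")

results = collection.query(
    query_texts=["How do I declare an expect/actual function?"],
    n_results=3
)
print(results['ids'][0])  # the three closest-matching doc files
```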
Step 3: MCP Server Configuration
```json
{
  "mcpServers": {
    "kotlin-docs": {
      "command": "python",
      "args": ["mcp_server.py"],
      "env": {
        "CHROMA_DB_PATH": "./kotlin_docs_db"
      }
    }
  }
}
```
Step 4: MCP Server Implementation
```python
import asyncio
import os

import chromadb
import mcp.types as types
from mcp.server import Server
from mcp.server.stdio import stdio_server

app = Server("kotlin-docs")

# Read the database path from the env block in the Step 3 config
client = chromadb.PersistentClient(path=os.environ.get("CHROMA_DB_PATH", "./kotlin_docs_db"))
collection = client.get_collection("kotlin_docs")

@app.list_tools()
async def handle_list_tools() -> list[types.Tool]:
    return [
        types.Tool(
            name="search_docs",
            description="Search Kotlin Multiplatform documentation",
            inputSchema={
                "type": "object",
                "properties": {
                    "query": {"type": "string"}
                },
                "required": ["query"]
            }
        )
    ]

@app.call_tool()
async def handle_call_tool(name: str, arguments: dict) -> list[types.TextContent]:
    if name == "search_docs":
        # Return the three documentation chunks most similar to the query
        results = collection.query(
            query_texts=[arguments["query"]],
            n_results=3
        )
        return [types.TextContent(type="text", text=doc) for doc in results['documents'][0]]
    raise ValueError(f"Unknown tool: {name}")

async def main():
    # Serve the tools over stdio so Cursor (or any MCP client) can launch and talk to us
    async with stdio_server() as (read_stream, write_stream):
        await app.run(read_stream, write_stream, app.create_initialization_options())

if __name__ == "__main__":
    asyncio.run(main())
```
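To verify the server end to end without involving Cursor, the MCP Python SDK’s stdio client can launch it and call the tool directly. A minimal sketch; the query string is just an example:

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main():
    # Launch mcp_server.py the same way the Step 3 config does
    params = StdioServerParameters(command="python", args=["mcp_server.py"])
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            print("tools:", [t.name for t in tools.tools])
            result = await session.call_tool(
                "search_docs", {"query": "expect/actual declarations"}
            )
            for item in result.content:
                print(item.text[:200])  # preview each returned chunk

asyncio.run(main())
```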
Step 5: Cursor Integration
In Cursor’s settings, add the MCP server configuration from Step 3 (current Cursor versions read it from an mcp.json file under the .cursor directory) and enable auto-build verification:
In your Cursor prompts, include:

```text
Before implementing, search for the latest documentation using the search_docs tool. After making changes, run the build command and fix any compilation errors using the retrieved documentation.
```
This setup provides several advantages:
- Contextual Retrieval: Only relevant documentation is included in the context
- Always Current: The docs are as fresh as the repository you clone - a periodic git pull and re-embed keeps them in sync with your project dependencies
- Reduced Hallucination: AI has access to factual, current information
- Build Verification: Automatic compilation testing catches errors immediately (a sketch of this step follows below)
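On that last point, the verification step can be as simple as shelling out to the build and handing compiler errors back to the agent. A minimal sketch, assuming a Gradle-based Kotlin Multiplatform project with a ./gradlew wrapper; Cursor’s actual hook may differ:

```python
import subprocess

def verify_build() -> tuple[bool, str]:
    # Run the project build; --console=plain keeps output machine-friendly
    result = subprocess.run(
        ["./gradlew", "build", "--console=plain"],
        capture_output=True, text=True
    )
    if result.returncode == 0:
        return True, "Build succeeded"
    # Hand back only the compiler error lines for the agent to fix
    errors = [
        line for line in (result.stdout + result.stderr).splitlines()
        if "error:" in line.lower()
    ]
    return False, "\n".join(errors)
```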
The key insight is treating AI models as powerful text processors that need external tools for current information, rather than omniscient coding partners.
Next Week
Next week marks the transition into the bug-squashing and polishing phase. The app is mostly feature-complete, but there are still some gaps to fill. My focus will be on:
- Comprehensive exception handling improvements throughout the app
- User interface polish - currently it looks like an intern project and needs professional refinement
- Systematic bug hunting before releasing to the India Alpha Test Group
- General quality assurance and user experience improvements
There are no specific acceptance criteria defined for this phase, since it’s about iterative improvement and quality control rather than new features.