Last Week
Last week was all about tests – lots and lots of tests. I spent the majority of my time writing unit tests for the features I’d already implemented in my app. While it wasn’t the most exciting part of development (let’s be honest, writing tests rarely is), I’ve come to realize just how crucial they are, especially when using AI to assist with programming.
This realization has led me to seriously consider adopting Test-Driven Development (TDD) – yes, the practice where you write tests before writing the actual code. I know it sounds backwards to many developers, but in our AI-assisted development world, it might just be the key to maintaining code quality and preventing the subtle bugs that AI can introduce.
What does it mean in English?
Imagine you’re building a house. Normally, you’d build the house first, then check if everything works properly – does the plumbing work? Do the lights turn on? With Test-Driven Development, it’s like creating a detailed checklist of everything that needs to work before you start building. You write down: “When I flip this switch, the light should turn on” or “When I turn this faucet, water should come out.”
In my case, I’ve been using AI assistants (like ChatGPT or GitHub Copilot) to help write code faster. Think of these AI tools as super-smart interns – they can do a lot of work quickly, but they sometimes make mistakes. Without proper tests, these mistakes can hide in the code and cause problems later. By writing tests first, I’m giving the AI clear instructions about what the code should do, making it less likely to create bugs.
Nerdy Details
Test-Driven Development follows a simple cycle known as Red-Green-Refactor:
- Red: Write a test that fails (because the feature doesn’t exist yet)
- Green: Write the minimal code to make the test pass
- Refactor: Improve the code while keeping tests green
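To make the cycle concrete before the KMP example, here's a toy red-green-refactor pass. The Greeter class is my own illustration, not code from the app, and plain check calls stand in for kotlin.test assertions so the snippet is self-contained:

```kotlin
// RED: write the test first; it fails (won't even compile) because
// Greeter doesn't exist yet.
fun `should greet by name`() {
    check(Greeter().greet("Ada") == "Hello, Ada!")
}

// GREEN: the minimal implementation that makes the test pass.
class Greeter {
    fun greet(name: String): String = "Hello, $name!"
}

// REFACTOR: with the test green, internals can be reworked freely
// (extract a template, add localization, etc.) while the test guards behavior.
```

Watching the test fail first is the point: the failure proves the test can actually catch a regression later.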
In Kotlin Multiplatform projects, TDD becomes even more powerful because you can write tests for both common code (shared across platforms) and platform-specific implementations. Here’s a practical example:
// commonTest/kotlin/com/example/cart/ShoppingCartTest.kt
import kotlin.test.Test
import kotlin.test.assertEquals

class ShoppingCartTest {

    @Test
    fun `should calculate total with tax`() {
        // Given
        val cart = ShoppingCart()
        cart.addItem(CartItem(name = "Laptop", price = 1000.0))
        cart.addItem(CartItem(name = "Mouse", price = 50.0))

        // Then (use a tolerance when comparing Doubles)
        assertEquals(1050.0, cart.getTotal(), absoluteTolerance = 0.001)
        assertEquals(1134.0, cart.getTotalWithTax(0.08), absoluteTolerance = 0.001)
    }

    @Test
    fun `should handle empty cart`() {
        // Given
        val cart = ShoppingCart()

        // Then
        assertEquals(0.0, cart.getTotal(), absoluteTolerance = 0.001)
        assertEquals(0.0, cart.getTotalWithTax(0.08), absoluteTolerance = 0.001)
    }

    @Test
    fun `should remove items from cart`() {
        // Given
        val cart = ShoppingCart()
        val laptop = CartItem(id = "1", name = "Laptop", price = 1000.0)
        cart.addItem(laptop)

        // When
        cart.removeItem("1")

        // Then
        assertEquals(0.0, cart.getTotal(), absoluteTolerance = 0.001)
    }
}
Now, here’s where AI comes into play. Your unit tests should be all the context the generative AI needs to take a stab at writing some code. You can provide these tests to an AI assistant with a prompt like:
"Based on these Kotlin unit tests, implement the ShoppingCart class
for a Kotlin Multiplatform project. Make sure all tests pass."
The AI will generate code that satisfies your tests:
// commonMain/kotlin/com/example/cart/ShoppingCart.kt
data class CartItem(
    val id: String = "",
    val name: String,
    val price: Double
)

class ShoppingCart {
    private val items = mutableListOf<CartItem>()

    fun addItem(item: CartItem) {
        items.add(item)
    }

    fun removeItem(id: String) {
        items.removeAll { it.id == id }
    }

    fun getTotal(): Double {
        return items.sumOf { it.price }
    }

    fun getTotalWithTax(taxRate: Double): Double {
        val subtotal = getTotal()
        return subtotal + (subtotal * taxRate)
    }
}
Establishing a strong quality assurance process, including automated testing and peer reviews, helps ensure that AI-generated code meets high standards and minimizes the introduction of bugs.
For more complex KMP scenarios with platform-specific implementations, you might test a repository pattern:
// commonTest/kotlin/com/example/data/PostRepositoryTest.kt
import kotlinx.coroutines.test.runTest
import kotlin.test.Test
import kotlin.test.assertEquals
import kotlin.test.assertTrue

class PostRepositoryTest {

    @Test
    fun `should fetch posts from API`() = runTest {
        // Given
        val fakeApi = FakePostApi()
        val repository = PostRepository(fakeApi)

        // When
        val result = repository.getPosts()

        // Then
        assertTrue(result.isSuccess)
        assertEquals(10, result.getOrNull()?.size)
    }

    @Test
    fun `should handle network errors gracefully`() = runTest {
        // Given
        val fakeApi = FakePostApi(shouldFail = true)
        val repository = PostRepository(fakeApi)

        // When
        val result = repository.getPosts()

        // Then
        assertTrue(result.isFailure)
        assertTrue(result.exceptionOrNull() is NetworkException)
    }
}
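The test above leans on a FakePostApi test double and a PostRepository that aren't shown. Here's a minimal sketch of what they might look like; the PostApi interface, the Post fields, and the NetworkException shape are my assumptions, not code from the app:

```kotlin
// commonMain: a hypothetical API abstraction the repository depends on
data class Post(val id: Int, val title: String)

class NetworkException(message: String) : Exception(message)

interface PostApi {
    suspend fun fetchPosts(): List<Post>
}

class PostRepository(private val api: PostApi) {
    // Wrap the API call so callers get a Result instead of a raised exception
    suspend fun getPosts(): Result<List<Post>> =
        try {
            Result.success(api.fetchPosts())
        } catch (e: NetworkException) {
            Result.failure(e)
        }
}

// commonTest: a fake that returns canned data or fails on demand
class FakePostApi(private val shouldFail: Boolean = false) : PostApi {
    override suspend fun fetchPosts(): List<Post> {
        if (shouldFail) throw NetworkException("simulated failure")
        return List(10) { Post(id = it, title = "Post $it") }
    }
}
```

A hand-written fake like this lives entirely in common code, which keeps the repository tests runnable on every KMP target without a mocking library.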
For AI-assisted development, consider these best practices:
- Write comprehensive tests first: Include edge cases, error scenarios, and happy paths
- Use tests as AI context: Test-driven development provides a framework for code generation that acts as user-defined, context-specific “guard rails” for your model or assistant.
- Validate AI output: Always run the generated code through your test suite
- Iterate based on test failures: If tests fail, provide the error messages back to the AI for corrections
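To make the first bullet concrete, here are the kinds of edge-case tests I mean for the cart example. The unknown-id and duplicate-id scenarios are my own additions, not tests from the app, and plain check calls stand in for kotlin.test assertions so the snippet stands alone:

```kotlin
// Compact copy of CartItem/ShoppingCart from above, so this snippet is self-contained
data class CartItem(val id: String = "", val name: String, val price: Double)

class ShoppingCart {
    private val items = mutableListOf<CartItem>()
    fun addItem(item: CartItem) { items.add(item) }
    fun removeItem(id: String) { items.removeAll { it.id == id } }
    fun getTotal(): Double = items.sumOf { it.price }
}

// In commonTest these would be @Test methods
fun `removing an unknown id leaves the cart untouched`() {
    val cart = ShoppingCart()
    cart.addItem(CartItem(id = "1", name = "Laptop", price = 1000.0))
    cart.removeItem("does-not-exist")
    check(cart.getTotal() == 1000.0)
}

fun `removing an id removes every item sharing it`() {
    val cart = ShoppingCart()
    cart.addItem(CartItem(id = "1", name = "Laptop", price = 1000.0))
    cart.addItem(CartItem(id = "1", name = "Laptop", price = 1000.0)) // duplicate id
    cart.removeItem("1")
    check(cart.getTotal() == 0.0)
}
```

Edge cases like these pin down behavior the happy-path tests never exercise, which is exactly where AI-generated implementations tend to drift.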
For platform-specific testing in KMP, you can leverage platform test sets:
// androidTest/kotlin/com/example/AndroidDatabaseTest.kt
import androidx.test.ext.junit.runners.AndroidJUnit4
import org.junit.runner.RunWith
import kotlin.test.Test

@RunWith(AndroidJUnit4::class)
class AndroidDatabaseTest {
    @Test
    fun testRoomDatabase() {
        // Android-specific Room database testing
    }
}

// iosTest/kotlin/com/example/IosKeychainTest.kt
import platform.Foundation.*
import kotlin.test.Test

class IosKeychainTest {
    @Test
    fun testKeychainStorage() {
        // iOS-specific Keychain testing
    }
}
AI tools can enhance productivity by automating repetitive tasks like code generation, testing, and debugging, letting developers focus on critical tasks. The key is combining AI with TDD practices to ensure quality.
For setting up automated testing in your KMP CI/CD pipeline:
# .github/workflows/kmp-tests.yml
name: KMP Tests
on: [push, pull_request]
jobs:
  test:
    runs-on: macos-latest
    steps:
      - uses: actions/checkout@v3
      - name: Set up JDK
        uses: actions/setup-java@v3
        with:
          java-version: '17'
          distribution: 'temurin'
      - name: Run common and Android unit tests
        run: ./gradlew :shared:testDebugUnitTest
      - name: Run iOS tests
        run: ./gradlew :shared:iosSimulatorArm64Test
      - name: Generate coverage report
        run: ./gradlew :shared:koverXmlReport
      - name: Upload coverage
        uses: codecov/codecov-action@v3
        with:
          files: ./shared/build/reports/kover/report.xml
When using AI for KMP development, provide context about the multiplatform nature:
"Create a Kotlin Multiplatform UserPreferences class that:
- Uses DataStore on Android
- Uses NSUserDefaults on iOS
- Has a common interface for getting/setting preferences
Include unit tests for the common code."
This approach ensures that AI generates platform-aware code while maintaining testability across all targets.
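The common contract that the DataStore and NSUserDefaults implementations would both satisfy might look like the sketch below. The interface shape and the in-memory fake are my guesses at a reasonable design, not actual output from the prompt:

```kotlin
// commonMain: shared contract; Android would back it with DataStore,
// iOS with NSUserDefaults, wired up via expect/actual or dependency injection
interface UserPreferences {
    fun getString(key: String, default: String): String
    fun putString(key: String, value: String)
}

// commonTest: an in-memory fake lets the common logic be unit-tested
// without touching either platform's real storage
class InMemoryUserPreferences : UserPreferences {
    private val store = mutableMapOf<String, String>()

    override fun getString(key: String, default: String): String =
        store[key] ?: default

    override fun putString(key: String, value: String) {
        store[key] = value
    }
}
```

Testing against the interface keeps the common tests platform-agnostic; only thin platform wrappers remain untested by common code, and those can be covered by the platform test sets shown earlier.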
Next Week
I’ll be traveling for the next week, so not much coding will happen. However, when I return in two weeks, I have clear goals:
- Increase test coverage: I'm currently nowhere near the 80-90% coverage that's typically considered the standard, so I'll work on writing tests for all the untested parts of the codebase.
- Implement a TDD workflow: Start practicing true test-driven development by writing tests before implementing new features.
- Prepare for an alpha release: With comprehensive test coverage in place, I'll be ready to release the first alpha build with confidence that the AI-assisted code is robust and reliable.
The journey from “tests are boring” to “tests are essential” has been eye-opening, especially in the context of AI-assisted development. When AI writes code at superhuman speed, tests become our safety net – ensuring that speed doesn’t come at the cost of quality.