Last Week
Database work consumed most of the week, bringing newfound respect for database engineers who navigate the treacherous waters between imperative and declarative paradigms. The Supabase diff tool’s quirks alone cost several days of development time.
The Database Declaration Dilemma
Understanding why databases resist declarative approaches requires grasping the fundamental difference between imperative and declarative programming.
Imperative: Give the computer step-by-step instructions (like handing an interior decorator a list of furniture movements).
Declarative: Describe the desired outcome (like showing a floor plan).
While UI components can be declared (“show a logo at the top, login button at the bottom”), databases are inherently stateful – they exist to store data. You can’t just declare a desired state without considering existing data, like trying to redecorate a lived-in home without instructions for handling personal belongings.
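To make the contrast concrete, here is a toy Kotlin sketch – purely illustrative, not how Supabase works internally. The imperative version spells out every step; the declarative version states only the target and lets a naive "diff" plan the steps.

// Imperative: spell out every step that mutates the existing state.
fun arrangeImperatively(room: MutableList<String>) {
    room.add("couch in the corner")
    room.add("table next to the couch")
}

// Declarative: describe only the target state...
val desiredRoom = listOf("couch in the corner", "table next to the couch")

// ...and let a generic "diff" work out which steps get you there.
fun planMoves(current: List<String>, desired: List<String>): List<String> =
    desired.filterNot { it in current }.map { "add $it" } +
        current.filterNot { it in desired }.map { "remove $it" }

fun main() {
    val current = mutableListOf("couch in the corner", "old bookshelf")
    println(planMoves(current, desiredRoom))
    // Prints: [add table next to the couch, remove old bookshelf]
}

Note the last line: a naive diff happily removes anything not mentioned in the target state. That property is exactly what makes declarative tooling risky for stateful systems like databases.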
What does it mean in English?
Imagine two ways to organize a room. You could write step-by-step instructions: “Move couch to corner, place table 5 inches from couch.” Or you could show a floor plan of how it should look. The second approach works great for hotel rooms that get reset daily, but not for your home full of personal items. Databases are like homes – they store important data that can’t just be rearranged without careful instructions about what to do with existing information.
Nerdy Details
The Migration File Problem
Traditional database management uses migration files – sequential instructions executed in order:
-- Migration 001: Create users table
CREATE TABLE users (
id UUID PRIMARY KEY,
email VARCHAR(255) UNIQUE NOT NULL,
created_at TIMESTAMP DEFAULT NOW()
);
-- Migration 002: Add phone number
ALTER TABLE users ADD COLUMN phone VARCHAR(20);
-- Migration 003: Create restaurants table
CREATE TABLE restaurants (
id UUID PRIMARY KEY,
name VARCHAR(255) NOT NULL,
owner_id UUID REFERENCES users(id)
);
-- Migration 457: Add index for performance
CREATE INDEX idx_restaurants_owner ON restaurants(owner_id);
-- Migration 892: Add soft delete
ALTER TABLE users ADD COLUMN deleted_at TIMESTAMP;
After 1,000 migrations, understanding the current schema means replaying every migration in sequence – slow for tooling and cognitively overwhelming for anyone reading the history.
Supabase’s Declarative Approach
Supabase attempts to solve this by letting you declare the desired schema state:
-- schema.sql (declarative)
CREATE TABLE users (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
email VARCHAR(255) UNIQUE NOT NULL,
phone VARCHAR(20),
deleted_at TIMESTAMP,
created_at TIMESTAMP DEFAULT NOW()
);
CREATE TABLE restaurants (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
name VARCHAR(255) NOT NULL,
owner_id UUID REFERENCES users(id) ON DELETE CASCADE
);
CREATE INDEX idx_restaurants_owner ON restaurants(owner_id);
The diff tool then generates migration files automatically by comparing current state to desired state. Sounds perfect, right?
The Hidden Dangers
The abstraction leaks when dealing with views and materialized views, where even an innocent-looking schema change can trigger a destructive operation:
-- Original view
CREATE VIEW active_restaurants AS
SELECT r.*, u.email as owner_email
FROM restaurants r
JOIN users u ON r.owner_id = u.id
WHERE u.deleted_at IS NULL;
-- Modified view (just adding a column)
CREATE VIEW active_restaurants AS
SELECT r.*, u.email as owner_email, u.phone as owner_phone
FROM restaurants r
JOIN users u ON r.owner_id = u.id
WHERE u.deleted_at IS NULL;
The diff tool generates:
-- Generated migration (DESTRUCTIVE!)
DROP VIEW active_restaurants CASCADE; -- 🚨 Danger!
CREATE VIEW active_restaurants AS
SELECT r.*, u.email as owner_email, u.phone as owner_phone
FROM restaurants r
JOIN users u ON r.owner_id = u.id
WHERE u.deleted_at IS NULL;
This DROP CASCADE silently takes dependent views and functions down with it, and can leave the application broken mid-deployment.
The Proper Workaround
Instead of relying on the diff tool for complex objects, manage them separately:
-- migrations/views.sql (manually managed)
CREATE OR REPLACE VIEW active_restaurants AS
SELECT
r.id,
r.name,
r.owner_id,
u.email as owner_email,
u.phone as owner_phone,
r.created_at
FROM restaurants r
JOIN users u ON r.owner_id = u.id
WHERE u.deleted_at IS NULL;
-- Use CREATE OR REPLACE to avoid drops
CI/CD Pipeline Guards
Essential safeguards to catch destructive operations:
# .github/workflows/database.yml
name: Database Safety Check

on:
  pull_request:
    paths:
      - 'supabase/migrations/**'

jobs:
  check-migrations:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3

      - name: Check for destructive operations
        run: |
          DESTRUCTIVE_KEYWORDS="DROP|TRUNCATE|DELETE FROM"
          for file in supabase/migrations/*.sql; do
            if grep -E "$DESTRUCTIVE_KEYWORDS" "$file"; then
              echo "❌ Destructive operation found in $file"
              echo "Manual review required for:"
              grep -n -E "$DESTRUCTIVE_KEYWORDS" "$file"
              exit 1
            fi
          done

      - name: Validate schema consistency
        run: |
          npx supabase db diff --schema public
          # Check if diff contains unexpected changes
          if [ -n "$(npx supabase db diff --schema public)" ]; then
            echo "⚠️ Schema differences detected"
            npx supabase db diff --schema public
          fi
Repository Integration Challenges
With the schema in place, the next challenge is integrating Supabase Postgres into the repository pattern:
// Data Transfer Objects (DTOs)
@Serializable
data class RestaurantDto(
val id: String,
val name: String,
val owner_id: String,
val created_at: String
)
// Domain models
data class Restaurant(
val id: RestaurantId,
val name: String,
val ownerId: UserId,
val createdAt: Instant
)
// Repository implementation
class SupabaseRestaurantRepository(
private val client: SupabaseClient
) : RestaurantRepository {
override suspend fun create(
name: String,
ownerId: UserId
): Result<Restaurant> {
return try {
val dto = RestaurantDto(
id = UUID.randomUUID().toString(),
name = name,
owner_id = ownerId.value,
created_at = Clock.System.now().toString()
)
val result = client
.from("restaurants")
.insert(dto)
.decodeSingle<RestaurantDto>()
Result.success(result.toDomain())
} catch (e: Exception) {
Result.failure(e)
}
}
override suspend fun findActiveByOwner(
ownerId: UserId
): Result<List<Restaurant>> {
return try {
// Using the view we created
val results = client
.from("active_restaurants")
.select()
.eq("owner_id", ownerId.value)
.decodeList<RestaurantDto>()
Result.success(results.map { it.toDomain() })
} catch (e: Exception) {
Result.failure(e)
}
}
}
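The repository above calls toDomain() on the DTO without showing it. A minimal sketch of that mapper – assuming RestaurantId and UserId are simple value classes and kotlinx-datetime's Instant, as the domain model suggests – could look like this:

import kotlinx.datetime.Instant

// Hypothetical mapper assumed by the repository above: DTO (snake_case,
// string-typed wire format) to domain model (value classes, real Instant).
fun RestaurantDto.toDomain(): Restaurant = Restaurant(
    id = RestaurantId(id),
    name = name,
    ownerId = UserId(owner_id),
    createdAt = Instant.parse(created_at)  // assumes an ISO-8601 instant string; adjust for a bare TIMESTAMP column
)

Keeping this mapping in one place means the snake_case wire format never leaks past the repository boundary.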
Testing Strategy with 95% Coverage
Achieving 95% line coverage with database operations requires careful test architecture:
// Test configuration
class TestSupabaseConfig {
companion object {
private var container: PostgreSQLContainer<*>? = null
@JvmStatic
fun getTestClient(): SupabaseClient {
if (container == null) {
container = PostgreSQLContainer("postgres:15")
.withDatabaseName("test")
.withUsername("test")
.withPassword("test")
container!!.start()
// Run migrations
runMigrations(container!!.jdbcUrl)
}
return createSupabaseClient(
supabaseUrl = "http://localhost:${container!!.getMappedPort(8000)}",
supabaseKey = "test-key"
) {
install(Postgrest)
install(Auth)
}
}
}
}
// Repository tests
class RestaurantRepositoryTest {
private val client = TestSupabaseConfig.getTestClient()
private val repository = SupabaseRestaurantRepository(client)
@Test
fun `create restaurant with valid data succeeds`() = runTest {
val result = repository.create(
name = "Test Restaurant",
ownerId = UserId("test-user-id")
)
assertTrue(result.isSuccess)
assertEquals("Test Restaurant", result.getOrNull()?.name)
}
@Test
fun `create restaurant with duplicate name fails`() = runTest {
repository.create("Duplicate", UserId("user1"))
val result = repository.create("Duplicate", UserId("user2"))
assertTrue(result.isFailure)
assertTrue(result.exceptionOrNull() is DuplicateNameException)
}
@Test
fun `find active restaurants excludes soft deleted`() = runTest {
val ownerId = UserId("test-owner")
// Create active restaurant
repository.create("Active", ownerId)
// Create and soft delete another
val deleted = repository.create("Deleted", ownerId).getOrThrow()
repository.softDelete(deleted.id)
val result = repository.findActiveByOwner(ownerId)
assertTrue(result.isSuccess)
assertEquals(1, result.getOrNull()?.size)
assertEquals("Active", result.getOrNull()?.first()?.name)
}
}
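The test configuration above references a runMigrations helper that isn't shown. A minimal sketch using plain JDBC – the directory path and credentials here are assumptions – might look like this:

import java.io.File
import java.sql.DriverManager

// Hypothetical helper assumed by TestSupabaseConfig: applies every migration
// file, in filename order, against the Testcontainers Postgres instance.
fun runMigrations(jdbcUrl: String, user: String = "test", password: String = "test") {
    DriverManager.getConnection(jdbcUrl, user, password).use { connection ->
        File("supabase/migrations")
            .listFiles { file -> file.extension == "sql" }
            ?.sortedBy { it.name }  // timestamped filenames keep migrations in order
            ?.forEach { migration ->
                connection.createStatement().use { statement ->
                    statement.execute(migration.readText())
                }
            }
    }
}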
Row-Level Security (RLS) Complications
Another layer of complexity emerges with RLS policies:
-- Enable RLS
ALTER TABLE restaurants ENABLE ROW LEVEL SECURITY;
-- Policy: Users can only see their own restaurants
CREATE POLICY "Users can view own restaurants"
ON restaurants FOR SELECT
USING (owner_id = auth.uid());
-- Policy: Users can only create restaurants for themselves
CREATE POLICY "Users can create own restaurants"
ON restaurants FOR INSERT
WITH CHECK (owner_id = auth.uid());
-- This breaks our tests! Need service role key for testing
Testing against these policies requires a service role key, which bypasses RLS:
class SupabaseRestaurantRepository(
private val client: SupabaseClient,
private val useServiceRole: Boolean = false
) : RestaurantRepository {
private suspend fun <T> executeWithRole(
block: suspend () -> T
): T {
return if (useServiceRole) {
// Use service role key for tests
withContext(Dispatchers.IO) {
client.withServiceRole {
block()
}
}
} else {
// Use regular auth for production
block()
}
}
override suspend fun create(
name: String,
ownerId: UserId
): Result<Restaurant> = executeWithRole {
// Repository logic here
}
}
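The withServiceRole block above is one option; another common pattern is simply to construct two clients – one with the anon key (RLS enforced) and one with the service role key (RLS bypassed) – and inject whichever is appropriate. A sketch, reusing the same createSupabaseClient call as the test configuration above; URLs and environment variable names are placeholders:

// Sketch only: keys and URLs are placeholders. The service role key bypasses
// RLS, so it must never ship in a client-facing build.
fun serviceRoleClient(): SupabaseClient = createSupabaseClient(
    supabaseUrl = System.getenv("SUPABASE_URL"),
    supabaseKey = System.getenv("SUPABASE_SERVICE_ROLE_KEY")  // bypasses RLS
) {
    install(Postgrest)
}

fun anonClient(): SupabaseClient = createSupabaseClient(
    supabaseUrl = System.getenv("SUPABASE_URL"),
    supabaseKey = System.getenv("SUPABASE_ANON_KEY")          // RLS enforced
) {
    install(Postgrest)
    install(Auth)
}

// Tests take the privileged client; production code takes the regular one.
val testRepository = SupabaseRestaurantRepository(serviceRoleClient(), useServiceRole = true)
val prodRepository = SupabaseRestaurantRepository(anonClient())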
Next Week
With the database schema battles mostly won, focus shifts to implementing the Supabase Postgres integration into the repository layer. This includes creating DTOs, mapping functions, and achieving 95% test coverage across domain and data layers. The presentation layer might have to wait another week given the testing requirements.
Every hour spent understanding database intricacies now prevents days of debugging mysterious data corruption later.